Getting Started with Kubernetes

Table of Contents

Getting Started with Kubernetes
Credits
About the Author
Acknowledgments
About the Reviewer
www.PacktPub.com
Support files, eBooks, discount offers, and more
Why subscribe?
Free access for Packt account holders
Preface
What this book covers
What you need for this book
Who this book is for
Conventions
Reader feedback
Customer support
Downloading the example code
Errata
Piracy
Questions
1. Kubernetes and Container Operations
A brief overview of containers
What is a container?
Why are containers so cool?
Advantages to Continuous Integration/Continuous Deployment
Resource utilization
Microservices and orchestration
Future challenges
Advantages of Kubernetes
Our first cluster
Kubernetes UI
Grafana
Swagger
Command line
Services running on the master
Services running on the minions
Tear down cluster
Working with other providers
Resetting the cluster
Summary
Footnotes
References
2. Kubernetes – Core Concepts and Constructs
The architecture
Master
Node (formerly minions)
Core constructs
Pods
Pod example
Labels
The container's afterlife
Services
Replication controllers
Our first Kubernetes application
More on labels
Health checks
TCP checks
Life cycle hooks or graceful shutdown
Application scheduling
Scheduling example
Summary
Footnotes
3. Core Concepts – Networking, Storage, and Advanced Services
Kubernetes networking
Networking comparisons
Docker
Docker plugins (libnetwork)
Weave
Flannel
Project Calico
Balanced design
Advanced services
External services
Internal services
Custom load balancing
Cross-node proxy
Custom ports
Multiple ports
Migrations, multicluster, and more
Custom addressing
Service discovery
DNS
Persistent storage
Temporary disks
Cloud volumes
GCE persistent disks
AWS Elastic Block Store
Other PD options
Multitenancy
Limits
Summary
Footnotes
4. Updates and Gradual Rollouts
Example setup
Scaling up
Smooth updates
Testing, releases, and cutovers
Growing your cluster
Scaling up the cluster on GCE
Autoscaling and scaling down
Scaling up the cluster on AWS
Scaling manually
Summary
5. Continuous Delivery
Integration with continuous delivery
Gulp.js
Prerequisites
Gulp build example
Kubernetes plugin for Jenkins
Prerequisites
Installing plugins
Configuring the Kubernetes plugin
Bonus fun
Summary
6. Monitoring and Logging
Monitoring operations
Built-in monitoring
Exploring Heapster
Customizing our dashboards
FluentD and Google Cloud Logging
FluentD
Maturing our monitoring operations
GCE (StackDriver)
Sign-up for GCE monitoring
Configure detailed monitoring
Alerts
Beyond system monitoring with Sysdig
Sysdig Cloud
Detailed views
Topology views
Metrics
Alerting
Kubernetes support
The Sysdig command line
The csysdig command-line UI
Summary
Footnotes
7. OCI, CNCF, CoreOS, and Tectonic
The importance of standards
Open Container Initiative
Cloud Native Computing Foundation
Standard container specification
CoreOS
rkt
etcd
Kubernetes with CoreOS
Tectonic
Dashboard highlights
Summary
Footnotes
8. Towards Production-Ready
Ready for production
Security
Ready, set, go
Third-party companies
Private registries
Google Container Engine
Twistlock
Kismatic
Mesosphere (Kubernetes on Mesos)
Deis
OpenShift
Where to learn more
Summary
Index
Getting Started with Kubernetes

Copyright © 2015 Packt Publishing

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.

Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author nor Packt Publishing, and its dealers and distributors will be held liable for any damages caused or alleged to be caused directly or indirectly by this book.

Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

First published: December 2015
Production reference: 1151215
Published by Packt Publishing Ltd.
Livery Place
35 Livery Street
Birmingham B3 2PB, UK.
ISBN 978-1-78439-403-5
www.packtpub.com
Credits

Author
Jonathan Baier

Reviewer
Giragadurai Vallirajan

Commissioning Editor
Dipika Gaonkar

Acquisition Editor
Indrajit A. Das

Content Development Editor
Pooja Mhapsekar

Technical Editor
Gaurav Suri

Copy Editor
Dipti Mankame

Project Coordinator
Francina Pinto

Proofreader
Safis Editing

Indexer
Priya Sane

Graphics
Kirk D'Penha

Production Coordinator
Shantanu N. Zagade

Cover Work
Shantanu N. Zagade
About the Author

Jonathan Baier is a senior cloud architect living in Brooklyn, NY. He has had a passion for technology since an early age. When he was 14 years old, he was so interested in the family computer (an IBM PCjr) that he pored through the several hundred pages of BASIC and DOS manuals. Then, he taught himself to code a very poorly written version of Tic-Tac-Toe. During his teen years, he started a computer support business. Since then, he has dabbled in entrepreneurship several times throughout his life. He now enjoys working for Cloud Technology Partners, a cloud-focused professional service and application development firm headquartered in Boston.

He has over a decade of experience delivering technology strategies and solutions for both public and private sector businesses of all sizes. He has a breadth of experience working with a wide variety of technologies and with stakeholders from all levels of management.

Working in the areas of architecture, containerization, and cloud security, he has created strategic roadmaps to guide and help mature the overall IT capabilities of various enterprises. Furthermore, he has helped organizations of various sizes build and implement their cloud strategy and solve the many challenges that arise when "designs on paper" meet reality.
Acknowledgments

A tremendous thank you to my wonderful wife, Tomoko, and my playful son, Nikko. You both gave me incredible support and motivation during the writing process. There were many early morning, long weekend, and late night writing sessions that I could not have done without you both. Your smiles move mountains I could not on my own. You are my true north stars and my guiding light in the storm.

I'd also like to extend special thanks to all my colleagues and friends at Cloud Technology Partners, many of whom provided encouragement and support throughout the process. I'd especially like to thank Mike Kavis, David Linthicum, Alan Zall, Lisa Noon, and Charles Radi, who helped me make the book so much better with their efforts. I'd also like to thank the amazing CTP marketing team (Brad Young, Shannon Croy, and Nicole Givin) for making my work look great on the Web and in front of the camera.
About the Reviewer

Giragadurai Vallirajan is a seasoned technologist and entrepreneur. Currently, he is the CTO of Bluemeric Technologies Pvt Ltd, Bangalore. He has more than 12 years of experience in the IT industry and has worked for Fortune 100 companies, including Lehman Brothers (Tokyo) and Hewlett-Packard (Bangalore). Giragadurai has considerable expertise in big data analytics, predictive analytics, complex event processing, and performance tuning in distributed and cloud environments. He is an entrepreneur at heart; he started an analytics start-up, Vorthy Softwares (Singapore/India), before joining Bluemeric.
www.PacktPub.com

Support files, eBooks, discount offers, and more

For support files and downloads related to your book, please visit www.PacktPub.com.

Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.PacktPub.com and, as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at <[email protected]> for more details.

At www.PacktPub.com, you can also read a collection of free technical articles, sign up for a range of free newsletters, and receive exclusive discounts and offers on Packt books and eBooks.

https://www2.packtpub.com/books/subscription/packtlib

Do you need instant solutions to your IT questions? PacktLib is Packt's online digital book library. Here, you can search, access, and read Packt's entire library of books.

Why subscribe?

Fully searchable across every book published by Packt
Copy and paste, print, and bookmark content
On demand and accessible via a web browser

Free access for Packt account holders

If you have an account with Packt at www.PacktPub.com, you can use this to access PacktLib today and view 9 entirely free books. Simply use your login credentials for immediate access.
Preface

This book is a guide to getting started with Kubernetes and overall container management. We will walk you through the features and functions of Kubernetes and show how it fits into an overall operations strategy. You'll learn what hurdles lurk in moving containers off the developer's laptop and managing them at a larger scale. You'll also see how Kubernetes is the perfect tool to help you face these challenges with confidence.
What this book covers

Chapter 1, Kubernetes and Container Operations, provides a brief overview of containers and the how, what, and why of Kubernetes orchestration. It explores how it impacts your business goals and everyday operations.

Chapter 2, Kubernetes – Core Concepts and Constructs, will explore core Kubernetes constructs, such as pods, services, replication controllers, and labels, using a few simple examples. Basic operations, including health checks and scheduling, will also be covered.

Chapter 3, Core Concepts – Networking, Storage, and Advanced Services, covers cluster networking for Kubernetes and the Kubernetes proxy, a deeper dive into services, storage concerns, persistent data across pods, and the container life cycle. Finishing up, we will see a brief overview of some higher-level isolation features for multitenancy.

Chapter 4, Updates and Gradual Rollouts, takes a quick look at how to roll out updates and new features with minimal disruption to uptime. We will also look at scaling the Kubernetes cluster.

Chapter 5, Continuous Delivery, will cover integration of Kubernetes into your continuous delivery pipeline. We will see how to use a K8s cluster with Gulp.js and Jenkins as well.

Chapter 6, Monitoring and Logging, teaches you how to use and customize built-in and third-party monitoring tools on your Kubernetes cluster. We will look at built-in logging and monitoring, the Google Cloud Logging service, and Sysdig.

Chapter 7, OCI, CNCF, CoreOS, and Tectonic, shows how open standards benefit the entire container ecosystem. We'll look at a few of the prominent standards organizations and cover CoreOS and Tectonic. Also, we will explore their advantages as a host OS and enterprise platform.

Chapter 8, Towards Production-Ready, shows some of the helpful tools and third-party projects available and where you can go to get more help.
What you need for this book

This book will cover downloading and running the Kubernetes project. You'll need access to a Linux system (VirtualBox will work if you are on Windows) and some familiarity with the command shell.

In addition, you should have at least a Google Cloud Platform account. You can sign up for a free trial here:

https://cloud.google.com/

Also, an AWS account is necessary for a few sections of the book. You can also sign up for a free trial here:

https://aws.amazon.com/
Who this book is for

Whether you're heads down in development, neck deep in operations, or looking ahead as an executive, Kubernetes and this book are for you. Getting Started with Kubernetes will help you understand how to move your container applications into production with best practices and step-by-step walk-throughs tied to a real-world operational strategy. You'll learn how Kubernetes fits into your everyday operations and can help you prepare for production-ready container application stacks.

It will be helpful to have some familiarity with Docker containers, general software development, and operations at a high level.
Conventions

In this book, you will find a number of text styles that distinguish between different kinds of information. Here are some examples of these styles and an explanation of their meaning.

Code words in text, folder names, filenames, file extensions, and path names are shown as follows: "You can also use the scale command to reduce the number of replicas."

URLs are shown as follows: https://docs.docker.com/installation/

If we wish you to use a URL after replacing a portion of it with your own values, it will be shown like this: https://<your master ip>/swagger-ui/
Resource definition files and other code blocks are set as follows:

apiVersion: v1
kind: Pod
metadata:
  name: node-js-pod
spec:
  containers:
  - name: node-js-pod
    image: bitnami/apache:latest
    ports:
    - containerPort: 80
When we wish you to replace a portion of the listing with your own value, the relevant lines or items are set in bold between less than and greater than symbols:

subsets:
- addresses:
  - ip: <X.X.X.X>
  ports:
  - name: http
    port: 80
    protocol: TCP
Any command-line input or output is written as follows:

$ kubectl get pods

New terms and important words are shown in bold. Words that you see on the screen, for example, in menus or dialog boxes, appear in the text like this: "We can modify this group by clicking the Edit group button at the top of the page."

Note
Warnings or important notes appear in a box like this.

Tip
Tips and tricks appear like this.
Reader feedback

Feedback from our readers is always welcome. Let us know what you think about this book—what you liked or disliked. Reader feedback is important for us as it helps us develop titles that you will really get the most out of.

To send us general feedback, simply e-mail <[email protected]>, and mention the book's title in the subject of your message.

If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, see our author guide at www.packtpub.com/authors.
Customer support

Now that you are the proud owner of a Packt book, we have a number of things to help you to get the most from your purchase.

Downloading the example code

You can download the example code files from your account at http://www.packtpub.com for all the Packt Publishing books you have purchased. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.
Errata

Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you find a mistake in one of our books—maybe a mistake in the text or the code—we would be grateful if you could report this to us. By doing so, you can save other readers from frustration and help us improve subsequent versions of this book. If you find any errata, please report them by visiting http://www.packtpub.com/submit-errata, selecting your book, clicking on the Errata Submission Form link, and entering the details of your errata. Once your errata are verified, your submission will be accepted and the errata will be uploaded to our website or added to any list of existing errata under the Errata section of that title.

To view the previously submitted errata, go to https://www.packtpub.com/books/content/support and enter the name of the book in the search field. The required information will appear under the Errata section.
Piracy

Piracy of copyrighted material on the Internet is an ongoing problem across all media. At Packt, we take the protection of our copyright and licenses very seriously. If you come across any illegal copies of our works in any form on the Internet, please provide us with the location address or website name immediately so that we can pursue a remedy.

Please contact us at <[email protected]> with a link to the suspected pirated material.

We appreciate your help in protecting our authors and our ability to bring you valuable content.
Questions

If you have a problem with any aspect of this book, you can contact us at <[email protected]>, and we will do our best to address the problem.
Chapter 1. Kubernetes and Container Operations

This chapter will give a brief overview of containers and how they work, as well as why management and orchestration is important to your business and/or project team. The chapter will also give a brief overview of how Kubernetes orchestration can enhance our container management strategy and how we can get a basic Kubernetes cluster up, running, and ready for container deployments.

This chapter will include the following topics:

Introducing container operations and management
Why container management is important
Advantages of Kubernetes
Downloading the latest Kubernetes
Installing and starting up a new Kubernetes cluster
A brief overview of containers

Over the past two years, containers have grown in popularity like wildfire. You would be hard-pressed to attend an IT conference without finding popular sessions on Docker or containers in general.

Docker lies at the heart of the mass adoption and the excitement in the container space. As Malcolm McLean revolutionized the physical shipping world in 1957 by creating a standardized shipping container, which is used today for everything from ice cube trays to automobiles [1], Linux containers are revolutionizing the software development world by making application environments portable and consistent across the infrastructure landscape. As an organization, Docker has taken the existing container technology to a new level by making it easy to implement and replicate across environments and providers.
What is a container?

At the core of container technology are cGroups and namespaces. Additionally, Docker uses union filesystems for added benefits to the container development process.

Control groups (cGroups) work by allowing the host to share and also limit the resources each process or container can consume. This is important for both resource utilization and security, as it prevents denial-of-service attacks on the host's hardware resources. Several containers can share CPU and memory while staying within the predefined constraints.
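Kubernetes, which we cover from Chapter 2 onward, exposes these cGroup constraints declaratively. As a rough sketch (the container name and image here are illustrative, not from the book's examples), a pod definition can cap a container like this:

```yaml
spec:
  containers:
  - name: constrained-app          # hypothetical container name
    image: nginx:latest
    resources:
      limits:
        cpu: 500m        # at most half a CPU core
        memory: 128Mi    # hard memory cap, enforced via cGroups on the node
```

The scheduler then places the pod only on a node that can honor these limits, and the node's kernel enforces them through the same cGroup machinery described above.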
Namespaces offer another form of isolation in the way of processes. Processes are limited to seeing only the process IDs in the same namespace. Namespaces from other system processes would not be accessible from a container process. For example, a network namespace would isolate access to the network interfaces and configuration, which allows the separation of network interfaces, routes, and firewall rules.
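You can see namespace membership directly on any Linux host, with no container runtime needed: the kernel exposes each process's namespaces as links under /proc/<pid>/ns, and two processes share a namespace exactly when those links show the same inode number. A quick sketch:

```shell
# Show all the namespaces the current shell belongs to
ls -l /proc/$$/ns

# Print the PID and network namespace identifiers; processes in the
# same namespace report the same bracketed inode number
readlink /proc/$$/ns/pid
readlink /proc/$$/ns/net
```

A process inside a container would report different inode numbers here than a process on the host, which is exactly the isolation described above.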
Figure 1.1. Composition of a container
Union filesystems are also a key advantage to using Docker containers. The easiest way to understand union filesystems is to think of them like a layer cake, with each layer baked independently. The Linux kernel is our base layer; then, we might add an OS like Red Hat Linux or Ubuntu. Next, we might add an application like Nginx or Apache. Every change creates a new layer. Finally, as you make changes and new layers are added, you'll always have a top layer (think frosting) that is a writable layer.

Figure 1.2. Layered filesystem

What makes this truly efficient is that Docker caches the layers the first time we build them. So, let's say that we have an image with Ubuntu and then add Apache and build the image. Next, we build MySQL with Ubuntu as the base. The second build will be much faster because the Ubuntu layer is already cached. Essentially, our chocolate and vanilla layers, from Figure 1.2, are already baked. We simply need to bake the pistachio (MySQL) layer, assemble, and add the icing (writable layer).
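To make the caching example concrete, here is a hypothetical pair of build files (the image tag and package names are illustrative). The second build is faster because its FROM layer is already cached from the first:

```dockerfile
# Build file 1: Ubuntu base plus Apache
FROM ubuntu:14.04                                   # base layer, pulled and cached once
RUN apt-get update && apt-get install -y apache2    # adds a new layer on top

# Build file 2 (a separate Dockerfile): Ubuntu base plus MySQL.
# The FROM line reuses the cached ubuntu:14.04 layer, so only the
# install step below has to build a new layer.
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y mysql-server
```

Each RUN instruction produces one layer, so ordering instructions from least to most frequently changed maximizes cache reuse across builds.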
Why are containers so cool?

Containers on their own are not a new technology and have in fact been around for many years. What truly sets Docker apart is the tooling and ease of use it has brought to the community.

Advantages to Continuous Integration/Continuous Deployment

Wikipedia defines Continuous Integration as "the practice, in software engineering, of merging all developer working copies to a shared mainline several times a day." By having a continuous process of building and deploying code, organizations are able to instill quality control and testing as part of the everyday work cycle. The result is that updates and bug fixes happen much faster and overall quality improves.

However, there has always been a challenge in setting up development environments to match those of testing and production. Often, inconsistencies in these environments make it difficult to gain the full advantage of continuous delivery.

Using Docker, developers are now able to have truly portable deployments. Containers that are deployed on a developer's laptop are easily deployed on an in-house staging server. They are then easily transferred to the production server running in the cloud. This is because Docker builds containers up with build files that specify parent layers. One advantage of this is that it becomes very easy to ensure OS, package, and application versions are the same across development, staging, and production environments.

Because all the dependencies are packaged into the layers, the same host server can have multiple containers running a variety of OS or package versions. Further, we can have various languages and frameworks on the same host server without the typical dependency clashes we would get in a virtual machine (VM) with a single operating system.
Resource utilization

The well-defined isolation and layered filesystem also make containers ideal for running systems with a very small footprint and domain-specific purposes. A streamlined deployment and release process means we can deploy quickly and often. As such, many companies have reduced their deployment time from weeks or months to days and hours in some cases. This development life cycle lends itself extremely well to small, targeted teams working on small chunks of a larger application.
Microservices and orchestration

As we break down an application into very specific domains, we need a uniform way to communicate between all the various pieces and domains. Web services have served this purpose for years, but the added isolation and granular focus that containers bring have paved the way for what is being called microservices.

The definition for microservices can be a bit nebulous, but a definition from Martin Fowler, a respected author and speaker on software development, says [2]:

"In short, the microservice architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API. These services are built around business capabilities and independently deployable by fully automated deployment machinery. There is a bare minimum of centralized management of these services, which may be written in different programming languages and use different data storage technologies."

As the pivot to containerization and microservices evolves in an organization, it will soon need a strategy to maintain many containers and microservices. Some organizations will have hundreds or even thousands of containers running in the years ahead.
Future challenges

Life cycle processes alone are an important piece of operations and management. How will we automatically recover when a container fails? Which upstream services are affected by such an outage? How will we patch our applications with minimal downtime? How will we scale up our containers and services as our traffic grows?

Networking and processing are also important concerns. Some processes are part of the same service and may benefit from proximity on the network. Databases, for example, may send large amounts of data to a particular microservice for processing. How will we place containers near each other in our cluster? Is there common data that needs to be accessed? How will new services be discovered and made available to other systems?

Resource utilization is also key. The small footprint of containers means that we can optimize our infrastructure for greater utilization, extending the savings started in the elastic cloud world even further towards minimizing wasted hardware. How will we schedule workloads most efficiently? How will we ensure that our important applications always have the resources they need? How can we run less important workloads on spare capacity?

Finally, portability is a key factor in moving many organizations to containerization. Docker makes it very easy to deploy a standard container across various operating systems, cloud providers, and on-premise hardware, or even developer laptops. However, we still need tooling to move containers around. How will we move containers between different nodes on our cluster? How will we roll out updates with minimal disruption? What process do we use to perform blue-green deployments or canary releases?

Whether you are starting to build out individual microservices and separating concerns into isolated containers, or if you simply want to take full advantage of the portability and immutability in your application development, the need for management and orchestration becomes clear.
Advantages of Kubernetes

This is where orchestration tools such as Kubernetes offer the biggest value. Kubernetes (K8s) is an open source project that was released by Google in June 2014. Google released the project as part of an effort to share their own infrastructure and technology advantage with the community at large.

Google launches 2 billion containers a week in their infrastructure and has been using container technology for over a decade. Originally, they were building a system named Borg, and now Omega, to schedule their vast quantities of workloads across their ever-expanding data center footprint. They took many of the lessons they learned over the years and rewrote their existing data center management tool for wide adoption by the rest of the world. The result was the Kubernetes open source project [3].

Since its initial release in 2014, K8s has undergone rapid development with contributions from across the open source community, including Red Hat, VMware, and Canonical. The 1.0 release of Kubernetes went live in July 2015. We'll be covering version 1.0 throughout the book. K8s gives organizations a tool to deal with some of the major operations and management concerns. We will explore how Kubernetes helps deal with resource utilization, high availability, updates, patching, networking, service discovery, monitoring, and logging.
Our first cluster

Kubernetes is supported on a variety of platforms and OSes. For the examples in this book, I used an Ubuntu 14.04 Linux VirtualBox instance for my client and Google Compute Engine (GCE) with Debian for the cluster itself. We will also take a brief look at a cluster running on Amazon Web Services (AWS) with Ubuntu.

Tip
Most of the concepts and examples in this book should work on any installation of a Kubernetes cluster. To get more information on other platform setups, check the Kubernetes getting started page on the following GitHub link:
https://github.com/GoogleCloudPlatform/kubernetes/blob/v1.0.0/docs/getting-started-guides/README.md

First, let's make sure that our environment is properly set up before we install Kubernetes.
Start by updating packages:

$ sudo apt-get update

Install Python and curl if they are not present:

$ sudo apt-get install python
$ sudo apt-get install curl

Install the gcloud SDK:

$ curl https://sdk.cloud.google.com | bash
Tip
We will need to start a new shell before gcloud is on our path.

Configure your Google Cloud Platform (GCP) account information. This should automatically open a browser, where we can log in to our Google Cloud account and authorize the SDK:

$ gcloud auth login

Tip
If you have problems with login or want to use another browser, you can optionally use the --no-launch-browser command. Copy and paste the URL to the machine and/or browser of your choice. Log in with your Google Cloud credentials and click on Allow on the permissions page. Finally, you should receive an authorization code that you can copy and paste back into the shell where the prompt is waiting.

A default project should be set, but we can check this with the following:

$ gcloud config list project

We can modify this and set a new default project with this command. Make sure to use the project ID and not the project name, as follows:

$ gcloud config set project <PROJECT ID>

Tip
We can find our project ID in the console at:
https://console.developers.google.com/project
Alternatively, we can list active projects:
$ gcloud alpha projects list
Now that we have our environment set up, installing the latest Kubernetes version is done in a single step, as follows:

$ curl -sS https://get.k8s.io | bash

It may take a minute or two to download Kubernetes, depending on your connection speed. After this, it will automatically call the kube-up.sh script and start building our cluster. By default, it will use Google Cloud and GCE.

Tip
If something fails during the cluster setup and you need to start again, you can simply run the kube-up.sh script. Go to the folder where you ran the previous curl command. Then, you can kick off the cluster build with the following command:

$ kubernetes/cluster/kube-up.sh

After Kubernetes is downloaded and the kube-up.sh script has started, we will see quite a few lines roll past. Let's take a look at them one section at a time.
Figure 1.3. GCE prerequisite check

Tip
If your gcloud components are not up to date, you may be prompted to update.

The preceding section (Figure 1.3) shows the checks for prerequisites as well as makes sure that all components are up to date. This is specific to each provider. In the case of GCE, it will check that the SDK is installed and that all components are up to date. If not, you will see a prompt at this point to install or update.

Figure 1.4. Upload cluster packages

Now the script is turning up the cluster. Again, this is specific to the provider. For GCE, it first checks to make sure that the SDK is configured for a default project and zone. If they are set, you'll see those in the output.

Next, it uploads the server binaries to Google Cloud Storage, as seen in the Creating gs:\... lines.

Figure 1.5. Master creation

It then checks for any pieces of a cluster already running. Then, we finally start creating the cluster. In the output in Figure 1.5, we see it creating the master server, IP address, and appropriate firewall configurations for the cluster.

Figure 1.6. Minion creation

Finally, it creates the minions or nodes for our cluster. This is where our container workloads will actually run. It will continually loop and wait while all the minions start up. By default, the cluster will have four nodes (minions), but K8s supports having upwards of 100 (and soon beyond 1,000). We will come back to scaling the nodes later on in the book.

Figure 1.7. Cluster completion

Now that everything is created, the cluster is initialized and started. Assuming that everything goes well, we will get an IP address for the master. Also, note that the configuration, along with the cluster management credentials, is stored in /home/<Username>/.kube/config.

Figure 1.8. Cluster validation

Then, the script will validate the cluster. At this point, we are no longer running provider-specific code. The validation script will query the cluster via the kubectl.sh script. This is the central script for managing our cluster. In this case, it checks the number of minions found, registered, and in a ready state. It loops through, giving the cluster up to 10 minutes to finish initialization.

After a successful startup, a summary of the minions and the cluster component health is printed to the screen:

Figure 1.9. Cluster summary

Finally, a kubectl cluster-info command is run, which outputs the URL for the master services as well as DNS, UI, and monitoring. Let's take a look at some of these components.
Kubernetes UI

Open a browser and try the following URL:

https://<your master ip>/api/v1/proxy/namespaces/kube-system/services/kube-ui

The certificate is self-signed by default, so you'll need to ignore the warnings in your browser before proceeding. After this, we will see a login dialog. This is where we use the credentials listed during the K8s installation. We can find them at any time by simply using the config command:

$ kubectl config view

Now that we have credentials for login, use those, and we should see a dashboard like the following image:

Figure 1.10. Kubernetes UI dashboard

The main dashboard page gives us a summary of the minions (or slave nodes). We can also see the CPU, memory, and used disk space on each minion, as well as the IP address.

The UI has a number of built-in views listed under the Views drop-down menu on the top right of the screen. However, most of them will be empty by default. Once workloads and services are spun up, these views will become a lot more interesting.
Grafana

Another service installed by default is Grafana. This tool will give us a dashboard to view metrics on the cluster nodes. We can access it by using the following syntax in a browser:

https://<your master ip>/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana

Figure 1.11. Kubernetes Grafana dashboard

Here, Kubernetes is actually running a number of services. Heapster is used to collect resource usage on the pods and nodes and stores the information in InfluxDB. The results, like CPU and memory usage, are what we see in the Grafana UI. We will explore this in depth in Chapter 6, Monitoring and Logging.
Swagger

Swagger (http://swagger.io/) is a tool to add a higher level of interaction and easy discovery to an API.

Kubernetes has built a Swagger-enabled API, which can be accessed by using https://<your master ip>/swagger-ui/.

Figure 1.12. Kubernetes Swagger dashboard

Through this interface, you can learn a lot about the Kubernetes RESTful API. The bulk of the interesting endpoints are listed under v1. If we look at /api/v1/nodes, we can see the structure of the JSON response as well as details of possible parameters for the request. In this case, we see that the first parameter is pretty, which toggles whether the JSON is returned with pretty indentation for easier reading.

We can try this out by using https://<your master ip>/api/v1/nodes/.

By default, we'll see a JSON response with pretty indentation enabled. The response should have a list of all the nodes currently in our cluster.

Now, let's try tweaking the pretty request parameter you just learned about. Use https://<your master ip>/api/v1/nodes/?pretty=false.

Now we have the same response output, but with no indentation. This is a great resource for exploring the API and learning how to use various function calls to get more information and interact with your cluster programmatically.
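The pretty parameter controls nothing more than indentation of the JSON body. You can reproduce the difference locally, without touching the cluster, by piping a compact JSON document through a pretty-printer (this assumes python3 is on your client machine; the sample document here is made up, not real API output):

```shell
# Compact JSON, like a ?pretty=false response
echo '{"kind":"NodeList","items":[]}'

# The same document with pretty indentation, like the default response
echo '{"kind":"NodeList","items":[]}' | python3 -m json.tool
```

Either form carries identical data; pretty indentation just trades a few extra bytes for human readability.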
Command line

The kubectl.sh script has commands to explore our cluster and the workloads running on it. We will be using this command throughout the book, so let's take a second to set up our environment. We can do so by making the script executable and putting it on our PATH, in the following manner:

$ cd /home/<Username>/kubernetes/cluster
$ chmod +x kubectl.sh
$ export PATH=$PATH:/home/<Username>/kubernetes/cluster
$ ln -s kubectl.sh kubectl

Tip
You may choose to download the kubernetes folder outside your home folder, so modify the preceding commands as appropriate.

It is also a good idea to make the changes permanent by adding the export command to the end of the .bashrc file in your home directory.
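For example, assuming the default download location used above (written here with $HOME, which expands to your home directory), one appended line does it:

```shell
# Persist the PATH change so every new shell picks it up
echo 'export PATH=$PATH:$HOME/kubernetes/cluster' >> ~/.bashrc
```

Adjust the path if you placed the kubernetes folder somewhere else.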
Now that we have kubectl on our path, we can start working with it. It has quite a few commands. Since we have not spun up any applications yet, most of these commands will not be very interesting. However, we can explore two of them right away.
First, we have already seen the cluster-info command during initialization, but we can run it again at any time with the following:

$ kubectl cluster-info

Another useful command is get. The get command can be used to see currently running services, pods, replication controllers, and a lot more. Here are three examples that are useful right out of the gate:

List the nodes in our cluster:

$ kubectl get nodes

List cluster events:

$ kubectl get events

Finally, we can see any services that are running in the cluster, as follows:

$ kubectl get services

To start with, we will only see one service, named kubernetes. This service provides the core API server as well as monitoring and logging services for the pods and cluster.
Services running on the master

Let's dig a little bit deeper into our new cluster and its core services. By default, machines are named with the kubernetes- prefix. We can modify this using $KUBE_GCE_INSTANCE_PREFIX before a cluster is spun up. For the cluster we just started, the master should be named kubernetes-master. We can use the gcloud command-line utility to SSH into the machine. The following command will start an SSH session with the master node. Be sure to substitute your project ID and zone to match your environment. Also, note that you can launch SSH from the Google Cloud console using the following syntax:

$ gcloud compute --project "<Your project ID>" ssh --zone "<your gce zone>" "kubernetes-master"

Once we are logged in, we should get a standard shell prompt. Let's run the familiar sudo docker ps command.
Figure 1.13. Master container listing
Even though we have not deployed any applications on Kubernetes yet, we note that there are several containers already running. The following is a brief description of each container:

- fluentd-gcp: This container collects and sends the cluster logs file to the Google Cloud Logging service.
- kube-ui: This is the UI that we saw earlier.
- kube-controller-manager: The controller manager controls a variety of cluster functions. Ensuring accurate and up-to-date replication is one of its vital roles. Additionally, it monitors, manages, and discovers new nodes. Finally, it manages and updates service endpoints.
- kube-apiserver: This container runs the API server. As we explored in the Swagger interface, this RESTful API allows us to create, query, update, and remove various components of our Kubernetes cluster.
- kube-scheduler: The scheduler takes unscheduled pods and binds them to nodes based on the current scheduling algorithm.
- etcd: This runs the etcd software built by CoreOS. etcd is a distributed and consistent key-value store. This is where the Kubernetes cluster state is stored, updated, and retrieved by various components of K8s.
- pause: The pause container is often referred to as the pod infrastructure container and is used to set up and hold the networking namespace and resource limits for each pod.
Note: Figure 2.1 in the next chapter will also show how a few of these services work together.

To exit the SSH session, simply type exit at the prompt.
Services running on the minions

We could SSH to one of the minions, but since Kubernetes schedules workloads across the cluster, we would not see all the containers on a single minion. However, we can look at the pods running on all the minions using the kubectl command:

$ kubectl get pods

Since we have not started any applications on the cluster yet, we don't see any pods. However, there are actually several system pods running pieces of the Kubernetes infrastructure. We can see these pods by specifying the kube-system namespace. We will explore namespaces and their significance later, but for now, the --namespace=kube-system option can be used to look at these K8s system resources as follows:

$ kubectl get pods --namespace=kube-system

We should see something similar to the following:
etcd-server
fluentd-cloud-logging
kube-apiserver
kube-controller-manager
kube-scheduler
kube-ui
kube-dns
monitoring-heapster
monitoring-influx-grafana
The first six should look familiar. These are additional pieces of the services we saw running on the master. The final three are services we have not seen yet. kube-dns provides the DNS and service discovery plumbing. monitoring-heapster is the system used to monitor resource usage across the cluster. monitoring-influx-grafana provides the database and user interface we saw earlier for monitoring the infrastructure.

If we did SSH into a random minion, we would see several containers that run across a few of these pods. A sample might look like the image here:

Figure 1.14. Minion container listing

Again, we saw a similar lineup of services on the master. The services we did not see on the master include the following:
- skydns: This uses DNS to provide a distributed service discovery utility that works with etcd.
- kube2sky: This is the connector between skydns and Kubernetes. Services in the API are monitored for changes and updated in skydns appropriately.
- heapster: This does resource usage and monitoring.
- exechealthz: This performs health checks on the pods.
Tear down cluster

OK, this is our first cluster on GCE, but let's explore some other providers. To keep things simple, we need to remove the one we just created on GCE. We can tear down the cluster with one simple command:

$ kube-down.sh
Working with other providers

By default, Kubernetes uses the GCE provider for Google Cloud. We can override this default by setting the KUBERNETES_PROVIDER environment variable. The following providers are supported with values listed in Table 1.1:
Provider                             | KUBERNETES_PROVIDER value | Type
Google Compute Engine                | gce                       | Public cloud
Google Container Engine              | gke                       | Public cloud
Amazon Web Services                  | aws                       | Public cloud
Microsoft Azure                      | azure                     | Public cloud
Hashicorp Vagrant                    | vagrant                   | Virtual development environment
VMware vSphere                       | vsphere                   | Private cloud/on-premise virtualization
Libvirt running CoreOS               | libvirt-coreos            | Virtualization management tool
Canonical Juju (folks behind Ubuntu) | juju                      | OS service orchestration tool

Table 1.1. Kubernetes providers
Let’strysettinguptheclusteronAWS.Asaprerequisite,weneedtohavetheAWSCommandLineInterface(CLI)installedandconfiguredforouraccount.AWSCLIInstallationandconfigurationdocumentationcanbefoundhere:
Installationdocumentation:http://docs.aws.amazon.com/cli/latest/userguide/installing.html#install-bundle-other-osConfigurationdocumentation:http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html
Then, it is a simple environment variable setting as follows:

$ export KUBERNETES_PROVIDER=aws

Again, we can use the kube-up.sh command to spin up the cluster as follows:

$ kube-up.sh
As with GCE, the setup activity will take a few minutes. It will stage files in S3, create the appropriate instances, Virtual Private Cloud (VPC), security groups, and so on in our AWS account. Then, the Kubernetes cluster will be set up and started. Once everything is finished and started, we should see the cluster validation at the end of the output.

Figure 1.15. AWS cluster validation
Once again, we will SSH into the master. This time, we can use the native SSH client. We'll find the key files in /home/<username>/.ssh:

$ ssh -v -i /home/<username>/.ssh/kube_aws_rsa ubuntu@<Your master IP>

We'll use sudo docker ps to explore the running containers. We should see something like the following:

Figure 1.16. Master container listing (AWS)

For the most part, we see the same containers as our GCE cluster had. However, instead of the fluentd-gcp service, we see fluentd-elasticsearch.

On the AWS provider, Elasticsearch and Kibana are set up for us. We can find the Kibana UI by using the following syntax as the URL: https://<your master ip>/api/v1/proxy/namespaces/kube-system/services/kibana-logging/#/discover

Figure 1.17. Kubernetes Kibana dashboard
Resetting the cluster

That is a little taste of running the cluster on AWS. For the remainder of the book, I will be basing my examples on a GCE cluster. For the best experience following along, you can get back to a GCE cluster easily.

Simply tear down the AWS cluster as follows:

$ kube-down.sh

Then, create a GCE cluster again using the following:

$ export KUBERNETES_PROVIDER=gce
$ kube-up.sh
Summary

We took a very brief look at how containers work and how they lend themselves to the new architecture patterns in microservices. You should now have a better understanding of how these two forces will require a variety of operations and management tasks and how Kubernetes offers strong features to address these challenges. Finally, we created two different clusters, on GCE and AWS, and explored the startup script as well as some of the built-in features of Kubernetes.

In the next chapter, we will explore the core concepts and abstractions K8s provides to manage containers and full application stacks. We will also look at basic scheduling, service discovery, and health checking.
Footnotes

1. Malcom McLean entry on Wikipedia: https://en.wikipedia.org/wiki/Malcom_McLean
2. Martin Fowler on microservices: http://martinfowler.com/articles/microservices.html
3. Kubernetes GitHub project page: https://github.com/kubernetes/kubernetes
Chapter 2. Kubernetes – Core Concepts and Constructs

This chapter will cover the core Kubernetes constructs, such as pods, services, replication controllers, and labels. A few simple application examples will be included to demonstrate each construct. The chapter will also cover basic operations for your cluster. Finally, health checks and scheduling will be introduced with a few examples.

This chapter will discuss the following topics:

- Kubernetes' overall architecture
- Introduction to core Kubernetes constructs, such as pods, services, replication controllers, and labels
- Understand how labels can ease management of a Kubernetes cluster
- Understand how to monitor services and container health
- Understand how to set up scheduling constraints based on available cluster resources
The architecture

Although Docker brings a helpful layer of abstraction and tooling around container management, Kubernetes brings similar assistance to orchestrating containers at scale as well as managing full application stacks.

K8s moves up the stack, giving us constructs to deal with management at the application or service level. This gives us automation and tooling to ensure high availability, application stack, and service-wide portability. K8s also allows finer control of resource usage, such as CPU, memory, and disk space, across our infrastructure.

Kubernetes provides this higher level of orchestration management by giving us key constructs to combine multiple containers, endpoints, and data into full application stacks and services. K8s then provides the tooling to manage the when, where, and how many of the stack and its components.

Figure 2.1. Kubernetes core architecture

In the preceding figure (Figure 2.1), we see the core architecture for Kubernetes. Most administrative interactions are done via the kubectl script and/or RESTful service calls to the API.

Note the ideas of the desired state and actual state carefully. This is key to how Kubernetes manages the cluster and its workloads. All the pieces of K8s are constantly working to monitor the current actual state and synchronize it with the desired state defined by the administrators via the API server or kubectl script. There will be times when these states do not match up, but the system is always working to reconcile the two.
Master

Essentially, the master is the brain of our cluster. Here, we have the core API server, which maintains RESTful web services for querying and defining our desired cluster and workload state. It's important to note that the control plane only accesses the master to initiate changes and not the nodes directly.
Additionally, the master includes the scheduler, which works with the API server to schedule workloads in the form of pods on the actual minion nodes. These pods include the various containers that make up our application stacks. By default, the basic Kubernetes scheduler spreads pods across the cluster and uses different nodes for matching pod replicas. Kubernetes also allows specifying necessary resources for each container, so scheduling can be altered by these additional factors.

The replication controller works with the API server to ensure that the correct number of pod replicas are running at any given time. This is exemplary of the desired state concept. If our replication controller is defining three replicas and our actual state is two copies of the pod running, then the scheduler will be invoked to add a third pod somewhere on our cluster. The same is true if there are too many pods running in the cluster at any given time. In this way, K8s is always pushing towards that desired state.

Finally, we have etcd running as a distributed configuration store. The Kubernetes state is stored here and etcd allows values to be watched for changes. Think of this as the brain's shared memory.
Node (formerly minions)

In each node, we have a couple of components. The kubelet interacts with the API server to update state and to start new workloads that have been invoked by the scheduler.
Kube-proxy provides basic load balancing and directs traffic destined for specific services to the proper pod on the backend. See the Services section later in this chapter.

Finally, we have some default pods, which run various infrastructure services for the node. As we explored briefly in the previous chapter, these pods include services for Domain Name System (DNS), logging, and pod health checks. The default pods will run alongside our scheduled pods on every node.
Note: In v1.0, minion was renamed to node, but there are still remnants of the term minion in some of the machine naming scripts and documentation that exist on the Web. For clarity, I've added the term minion in addition to node in a few places throughout the book.
Core constructs

Now, let's dive a little deeper and explore some of the core abstractions Kubernetes provides. These abstractions will make it easier to think about our applications and ease the burden of lifecycle management, high availability, and scheduling.

Pods

Pods allow you to keep related containers close in terms of the network and hardware infrastructure. Data can live near the application, so processing can be done without incurring a high latency from network traversal. Similarly, common data can be stored on volumes that are shared between a number of containers. Pods essentially allow you to logically group containers and pieces of our application stacks together.

While pods may run one or more containers inside, the pod itself may be one of many that are running on a Kubernetes (minion) node. As we'll see, pods give us a logical group of containers that we can then replicate, schedule, and balance service endpoints across.
PodexampleLet’stakeaquicklookatapodinaction.WewillspinupaNode.jsapplicationonthecluster.You’llneedaGCEclusterrunningforthis,soseeChapter1,KubernetesandContainerOperations,undertheOurfirstclustersection,ifyoudon’talreadyhaveonestarted.
Now,let’smakeadirectoryforourdefinitions.Inthisexample,Iwillcreateafolderinthe/book-examplessubfolderunderourhomedirectory.
$mkdirbook-examples
$cdbook-examples
$mkdir02_example
$cd02_example
Tip: Downloading the example code

You can download the example code files from your account at http://www.packtpub.com for all the Packt Publishing books you have purchased. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.
Use your favorite editor to create the following file:

apiVersion: v1
kind: Pod
metadata:
  name: node-js-pod
spec:
  containers:
  - name: node-js-pod
    image: bitnami/apache:latest
    ports:
    - containerPort: 80

Listing 2-1: nodejs-pod.yaml
This file creates a pod named node-js-pod with the latest bitnami/apache container running on port 80. We can check this using the following command:

$ kubectl create -f nodejs-pod.yaml

The output is as follows:

pods/node-js-pod

This gives us a pod running the specified container. We can see more information on the pod by running the following command:

$ kubectl describe pods/node-js-pod
You’llseeagooddealofinformation,suchasthepod’sstatus,IPaddress,andevenrelevantlogevents.You’llnotethepodIPaddressisaprivate10.x.x.xaddress,sowecannotaccessitdirectlyfromourlocalmachine.NottoworryasthekubectlexeccommandmirrorsDocker’sexecfunctionality.Usingthisfeature,wecanrunacommandinsideapod:
$kubectlexecnode-js-pod—curl<privateipaddress>
TipBydefault,thisrunsacommandinthefirstcontaineritfinds,butyoucanselectaspecificoneusingthe-cargument.
Afterrunning,thecommandyoushouldseesomeHTMLcode.We’llhaveaprettierviewlaterinthechapter,butfornow,wecanseethatourpodisindeedrunningasexpected.
Labels

Labels give us another level of categorization, which becomes very helpful in terms of everyday operations and management. Similar to tags, labels can be used as the basis of service discovery as well as a useful grouping tool for day-to-day operations and management tasks.

Labels are just simple key-value pairs. You will see them on pods, replication controllers, services, and so on. The label acts as a selector and tells Kubernetes which resources to work with for a variety of operations. Think of it as a filtering option.

We will take a look at labels more in depth later in this chapter, but first, we will explore the remaining two constructs: services and replication controllers.
Thecontainer’safterlifeAsanyoneinoperationscanattest,failureshappenallthetime.Containersandpodscanandwillcrash,becomecorrupted,ormaybeevenjustgetaccidentallyshutoffbyaclumsyadminpokingaroundononeofthenodes.Strongpolicyandsecuritypracticeslikeenforcingleastprivilegecurtailsomeoftheseincidents,but“involuntaryworkloadslaughterhappens”andissimplyafactofoperations.
Luckily,Kubernetesprovidestwoveryvaluableconstructstokeepthissomberaffairalltidiedupbehindthecurtains.Servicesandreplicationcontrollersgiveustheabilitytokeepourapplicationsrunningwithlittleinterruptionandgracefulrecovery.
Services

Services allow us to abstract access away from the consumers of our applications. Using a reliable endpoint, users and other programs can access pods running on your cluster seamlessly.

K8s achieves this by making sure that every node in the cluster runs a proxy named kube-proxy. As the name suggests, kube-proxy's job is to proxy communication from a service endpoint back to the corresponding pod that is running the actual application.

Figure 2.2. The kube-proxy architecture
Membership in the service load balancing pool is determined by the use of selectors and labels. Pods with matching labels are added to the list of candidates where the service forwards traffic. A virtual IP address and port are used as the entry point for the service, and traffic is then forwarded to a random pod on a target port defined by either K8s or your definition file.

Updates to service definitions are monitored and coordinated from the K8s cluster master and propagated to the kube-proxy daemons running on each node.

Tip: At the moment, kube-proxy is running on the node host itself. There are plans to containerize this and the kubelet by default in the future.
Replication controllers

Replication controllers (RCs), as the name suggests, manage the number of pods, and included container images, that run across our cluster's nodes. They ensure that an instance of an image is being run with the specific number of copies.

As you start to operationalize your containers and pods, you'll need a way to roll out updates, scale the number of copies running (both up and down), or simply ensure that at least one instance of your stack is always running. RCs create a high-level mechanism to make sure that things are operating correctly across the entire application and cluster.

RCs are simply charged with ensuring that you have the desired scale for your application. You define the number of pod replicas you want running and give it a template for how to create new pods. Just like services, we will use selectors and labels to define a pod's membership in a replication controller.

Tip: Kubernetes doesn't require the strict behavior of the replication controller. In fact, version 1.1 has a job controller in beta that can be used for short-lived workloads, which allows jobs to be run to a completion state.
Our first Kubernetes application

Before we move on, let's take a look at these three concepts in action. Kubernetes ships with a number of examples installed, but we will create a new example from scratch to illustrate some of the concepts.

We've already created a pod definition file, but as we learned, there are many advantages to running our pods via replication controllers. Again, using the book-examples/02_example folder we made earlier, we will create some definition files and start a cluster of Node.js servers using a replication controller approach. Additionally, we'll add a public face to it with a load-balanced service.

Use your favorite editor to create the following file:
apiVersion: v1
kind: ReplicationController
metadata:
  name: node-js
  labels:
    name: node-js
    deployment: demo
spec:
  replicas: 3
  selector:
    name: node-js
    deployment: demo
  template:
    metadata:
      labels:
        name: node-js
        deployment: demo
    spec:
      containers:
      - name: node-js
        image: jonbaier/node-express-info:latest
        ports:
        - containerPort: 80

Listing 2-2: nodejs-controller.yaml
This is the first resource definition file for our cluster, so let's take a closer look. You'll note that it has four first-level elements (kind, apiVersion, metadata, and spec). These are common among all top-level Kubernetes resource definitions:

- kind tells K8s the type of resource we are creating. In this case, the type is ReplicationController. The kubectl script uses a single create command for all types of resources. The benefit here is that you can easily create a number of resources of various types without needing to specify individual parameters for each type. However, it requires that the definition files can identify what it is they are specifying.
- apiVersion simply tells Kubernetes which version of the schema we are using. All examples in this book will be on v1.
- metadata is where we will give the resource a name and also specify labels that will be used to search and select resources for a given operation. The metadata element also allows you to create annotations, which are for nonidentifying information that might be useful for client tools and libraries.
- Finally, we have spec, which will vary based on the kind or type of resource we are creating. In this case, it's ReplicationController, which ensures the desired number of pods are running. The replicas element defines the desired number of pods, the selector tells the controller which pods to watch, and finally, the template element defines a template to launch a new pod. The template section contains the same pieces we saw in our pod definition earlier. An important thing to note is that the selector values need to match the labels values specified in the pod template. Remember that this matching is used to select the pods being managed.
Now,let’stakealookattheservicedefinition:
apiVersion:v1
kind:Service
metadata:
name:node-js
labels:
name:node-js
spec:
type:LoadBalancer
ports:
-port:80
selector:
name:node-js
Listing2-3:nodejs-rc-service.yaml
The YAML here is similar to the ReplicationController. The main difference is seen in the service spec element. Here, we define the service type, listening port, and selector, which tells the service proxy which pods can answer the service.

Tip: Kubernetes supports both YAML and JSON formats for definition files.
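As an illustration of the JSON alternative (this file is my own rendering, not one of the book's listings), the service from Listing 2-3 could equally be written as:

```json
{
  "apiVersion": "v1",
  "kind": "Service",
  "metadata": {
    "name": "node-js",
    "labels": {
      "name": "node-js"
    }
  },
  "spec": {
    "type": "LoadBalancer",
    "ports": [
      { "port": 80 }
    ],
    "selector": {
      "name": "node-js"
    }
  }
}
```

It would be created with the same kubectl create -f command shown next.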
Create the Node.js express replication controller:

$ kubectl create -f nodejs-controller.yaml

The output is as follows:

replicationcontrollers/node-js

This gives us a replication controller that ensures that three copies of the container are always running:

$ kubectl create -f nodejs-rc-service.yaml

The output is as follows:

services/node-js

On GCE, this will create an external load balancer and forwarding rules, but you may need to add additional firewall rules. In my case, the firewall was already open for port 80. However, you may need to open this port, especially if you deploy a service with ports other than 80 and 443.

OK, now we have a running service, which means that we can access the Node.js servers from a reliable URL. Let's take a look at our running services:

$ kubectl get services

The following screenshot is the result of the preceding command:

Figure 2.3. Services listing
In the preceding figure (Figure 2.3), you should note that the node-js service is running and, in the IP(S) column, you should have both a private and a public (130.211.186.84 in the screenshot) IP address. Let's see if we can connect by opening up the public address in a browser:
Figure 2.4. Container info application

You should see something like Figure 2.4. If you visit it multiple times, you should note that the container name changes. Essentially, the service load balancer is rotating between the available pods on the backend.

Note: Browsers usually cache web pages, so to really see the container name change, you may need to clear your cache or use a proxy like this one: https://hide.me/en/proxy
Let’stryplayingchaosmonkeyabitandkilloffafewcontainerstoseewhatKubernetesdoes.Inordertodothis,weneedtoseewherethepodsareactuallyrunning.First,let’slistourpods:
$kubectlgetpods
Thefollowingscreenshotistheresultoftheprecedingcommand:
Figure2.5.Currentlyrunningpods
Now,let’sgetsomemoredetailsononeofthepodsrunninganode-jscontainer.Youcandothiswiththedescribecommandwithoneofthepodnameslistedinthelastcommand:
$kubectldescribepod/node-js-sjc03
Thefollowingscreenshotistheresultoftheprecedingcommand:
Figure2.6.Poddescription
Youshouldseetheprecedingoutput.TheinformationweneedistheNode:section.Let’susethenodenametoSSH(shortforSecureShell)intothe(minion)noderunningthisworkload:
$gcloudcompute--project"<YourprojectID>"ssh--zone"<yourgcezone>"
"<Nodefrompoddescribe>"
Once SSHed into the node, if we run a sudo docker ps command, we should see at least two containers: one running the pause image and one running the actual node-express-info image. You may see more if K8s scheduled more than one replica on this node. Let's grab the container ID of the jonbaier/node-express-info image (not gcr.io/google_containers/pause) and kill it off to see what happens. Save this container ID somewhere for later:

$ sudo docker ps --filter="name=node-js"
$ sudo docker stop <node-express container id>
$ sudo docker rm <container id>
$ sudo docker ps --filter="name=node-js"
Unless you are really quick, you'll probably note that there is still a node-express-info container running, but look closely and you'll note that the container id is different and the creation timestamp shows only a few seconds ago. If you go back to the service URL, it is functioning like normal. Go ahead and exit the SSH session for now.

Here, we are already seeing Kubernetes playing the role of on-call operations, ensuring that our application is always running.

Let's see if we can find any evidence of the outage. Go to the Events page in the Kubernetes UI. You can find it on the main K8s dashboard under Events in the Views menu. Alternatively, you can just use the following URL, adding your master IP: https://<your master ip>/api/v1/proxy/namespaces/kube-system/services/kube-ui/#/dashboard/events

You will see a screen similar to the following screenshot:

Figure 2.7. Kubernetes UI event page

You should see three recent events. First, Kubernetes pulls the image. Second, it creates a new container with the pulled image. Finally, it starts that container again. You'll note that, from the timestamps, this all happens in less than a second. Time taken may vary based on cluster size and image pulls, but the recovery is very quick.
More on labels

As mentioned previously, labels are just simple key-value pairs. They are available on pods, replication controllers, services, and more. If you recall our service YAML, in Listing 2-3: nodejs-rc-service.yaml, there was a selector attribute. The selector tells Kubernetes which labels to use in finding pods to forward traffic to for that service.

K8s allows users to work with labels directly on replication controllers and services. Let's modify our replicas and services to include a few more labels. Once again, use your favorite editor and create these two files as follows:
apiVersion: v1
kind: ReplicationController
metadata:
  name: node-js-labels
  labels:
    name: node-js-labels
    app: node-js-express
    deployment: test
spec:
  replicas: 3
  selector:
    name: node-js-labels
    app: node-js-express
    deployment: test
  template:
    metadata:
      labels:
        name: node-js-labels
        app: node-js-express
        deployment: test
    spec:
      containers:
      - name: node-js-labels
        image: jonbaier/node-express-info:latest
        ports:
        - containerPort: 80

Listing 2-4: nodejs-labels-controller.yaml
apiVersion: v1
kind: Service
metadata:
  name: node-js-labels
  labels:
    name: node-js-labels
    app: node-js-express
    deployment: test
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    name: node-js-labels
    app: node-js-express
    deployment: test

Listing 2-5: nodejs-labels-service.yaml
Create the replication controller and service as follows:

$ kubectl create -f nodejs-labels-controller.yaml
$ kubectl create -f nodejs-labels-service.yaml

Let's take a look at how we can use labels in everyday management. The following table shows us the options to select labels:
Operators  | Description                                                                           | Example
= or ==    | You can use either style to select keys with values equal to the string on the right  | name = apache
!=         | Select keys with values that do not equal the string on the right                     | environment != test
in         | Select resources whose labels have keys with values in this set                       | tier in (web, app)
notin      | Select resources whose labels have keys with values not in this set                   | tier notin (lb, app)
<key name> | Use a key name only to select resources whose labels contain this key                 | tier

Table 1: Label selectors
Let’strylookingforreplicaswithtestdeployments:
$kubectlgetrc-ldeployment=test
Thefollowingscreenshotistheresultoftheprecedingcommand:
Figure2.8.Replicationcontrollerlisting
You’llnoticethatitonlyreturnsthereplicationcontrollerwejuststarted.Howaboutserviceswithalabelnamedcomponent?Usethefollowingcommand:
$kubectlgetservices-lcomponent
Thefollowingscreenshotistheresultoftheprecedingcommand:
Figure2.9.Listingofserviceswithalabelnamed“component”
Here, we see the core Kubernetes service only. Finally, let's just get the node-js servers we started in this chapter. See the following command:

$ kubectl get services -l "name in (node-js,node-js-labels)"

The following screenshot is the result of the preceding command:

Figure 2.10. Listing of services with a label name and a value of "node-js" or "node-js-labels"
Additionally, we can perform management tasks across a number of pods and services. For example, we can kill all replication controllers that are part of the demo deployment (if we had any running) as follows:

$ kubectl delete rc -l deployment=demo

Otherwise, kill all services that are not part of a production or test deployment (again, if we had any running), as follows:

$ kubectl delete service -l "deployment notin (test, production)"

It's important to note that while label selection is quite helpful in day-to-day management tasks, it does require proper deployment hygiene on our part. We need to make sure that we have a tagging standard and that it is actively followed in the resource definition files for everything we run on Kubernetes.
Tip: While we used service definition YAML files to create our services thus far, you can actually create them using a kubectl command only. To try this out, first run the get pods command and get one of the node-js pod names. Next, use the following expose command to create a service endpoint for just that pod:

$ kubectl expose pods/node-js-gxkix --port=80 --name=testing-vip --create-external-load-balancer=true

This will create a service named testing-vip and also a public VIP (load balancer IP) that can be used to access this pod over port 80. There are a number of other optional parameters that can be used. These can be found with the following:

$ kubectl expose --help
Health checks

Kubernetes provides two layers of health checking. First, in the form of HTTP or TCP checks, K8s can attempt to connect to a particular endpoint and give a status of healthy on a successful connection. Second, application-specific health checks can be performed using command-line scripts.

Let's take a look at a few health checks in action. First, we'll create a new controller with a health check:
apiVersion: v1
kind: ReplicationController
metadata:
  name: node-js
  labels:
    name: node-js
spec:
  replicas: 3
  selector:
    name: node-js
  template:
    metadata:
      labels:
        name: node-js
    spec:
      containers:
      - name: node-js
        image: jonbaier/node-express-info:latest
        ports:
        - containerPort: 80
        livenessProbe:
          # An HTTP health check
          httpGet:
            path: /status/
            port: 80
          initialDelaySeconds: 30
          timeoutSeconds: 1

Listing 2-6: nodejs-health-controller.yaml
Note the addition of the livenessProbe element. This is our core health check element. From there, we can specify httpGet, tcpSocket, or exec. In this example, we use httpGet to perform a simple check for a URI on our container. The probe will check the path and port specified and restart the pod if it doesn't successfully return.
Tip: Status codes between 200 and 399 are all considered healthy by the probe.
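To make that rule concrete, here is a tiny illustrative shell function (my own sketch, not part of Kubernetes) that applies the same 200-399 test to a status code:

```shell
# Mirrors the probe's success rule: only 200 through 399 count as healthy.
is_healthy() {
  [ "$1" -ge 200 ] && [ "$1" -le 399 ]
}

is_healthy 200 && echo "200: healthy"    # 2xx responses pass
is_healthy 302 && echo "302: healthy"    # 3xx redirects also pass
is_healthy 500 || echo "500: unhealthy"  # 4xx/5xx fail the probe
```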
Finally, initialDelaySeconds gives us the flexibility to delay health checks until the pod has finished initializing. timeoutSeconds is simply the timeout value for the probe.
Let’suseournewhealthcheck-enabledcontrollertoreplacetheoldnode-jsRC.Wecandothisusingthereplacecommand,whichwillreplacethereplicationcontroller
definition:
$kubectlreplace-fnodejs-health-controller.yaml
ReplacingtheRConit’sownwon’treplaceourcontainersbecauseitstillhasthreehealthypodsfromourfirstrun.Let’skilloffthosepodsandlettheupdatedReplicationControllerreplacethemwithcontainersthathavehealthchecks.
$kubectldeletepods-lname=node-js
Now, after waiting a minute or two, we can list the pods in an RC and grab one of the pod IDs to inspect a bit deeper with the describe command:

$ kubectl describe rc/node-js

The following screenshot is the result of the preceding command:

Figure 2.11. Description of "node-js" replication controller
Then, use the following command for one of the pods:

$ kubectl describe pods/node-js-1m3cs

The following screenshot is the result of the preceding command:

Figure 2.12. Description of "node-js-1m3cs" pod
Depending on your timing, you will likely have a number of events for the pod. Within a minute or two, you'll note a pattern of killing, started, and created events repeating over and over again. You should also see an unhealthy event described as Liveness probe failed: Cannot GET /status/. This is our health check failing because we don't have a page responding at /status.

You may note that if you open a browser to the service load balancer address, it still responds with a page. You can find the load balancer IP with a kubectl get services command.

This is happening for a number of reasons. First, the health check is simply failing because /status doesn't exist, but the page where the service is pointed is still functioning normally. Second, the livenessProbe is only charged with restarting the container on a health check fail. There is a separate readinessProbe that will remove a container from the pool of pods answering service endpoints.
Let’smodifythehealthcheckforapagethatdoesexistinourcontainer,sowehaveaproperhealthcheck.We’llalsoaddareadinesscheckandpointittothenonexistentstatuspage.Openthenodejs-health-controller.yamlfileandmodifythespecsectiontomatchListing2-7andsaveitasnodejs-health-controller-2.yaml.
apiVersion: v1
kind: ReplicationController
metadata:
  name: node-js
  labels:
    name: node-js
spec:
  replicas: 3
  selector:
    name: node-js
  template:
    metadata:
      labels:
        name: node-js
    spec:
      containers:
      - name: node-js
        image: jonbaier/node-express-info:latest
        ports:
        - containerPort: 80
        livenessProbe:
          # An HTTP health check
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 30
          timeoutSeconds: 1
        readinessProbe:
          # An HTTP health check
          httpGet:
            path: /status/
            port: 80
          initialDelaySeconds: 30
          timeoutSeconds: 1

Listing 2-7: nodejs-health-controller-2.yaml
This time, we will delete the old RC, which will kill the pods with it, and create a new RC with our updated YAML file.

$ kubectl delete rc -l name=node-js
$ kubectl create -f nodejs-health-controller-2.yaml

Now, when we describe one of the pods, we only see the creation of the pod and the container. However, you'll note that the service load balancer IP no longer works. If we run the describe command on one of the new pods, we'll note a Readiness probe failed error message, but the pod itself continues running. If we change the readiness probe path to path: /, we will again be able to fulfill requests from the main service. Open up nodejs-health-controller-2.yaml in an editor and make that update now. Then, once again remove and recreate the replication controller:

$ kubectl delete rc -l name=node-js
$ kubectl create -f nodejs-health-controller-2.yaml

Now the load balancer IP should work once again. Keep these pods around as we will use them again in Chapter 3, Core Concepts – Networking, Storage, and Advanced Services.
TCP checks

Kubernetes also supports health checks via simple TCP socket checks and also with custom command-line scripts. The following snippets are examples of what both use cases look like in the YAML file:
livenessProbe:
  exec:
    command:
    - /usr/bin/health/checkHttpService.sh
  initialDelaySeconds: 90
  timeoutSeconds: 1

Listing 2-8: Health check using command-line script
livenessProbe:
  tcpSocket:
    port: 80
  initialDelaySeconds: 15
  timeoutSeconds: 1

Listing 2-9: Health check using simple TCP socket connection
Lifecycle hooks or graceful shutdown
As you run into failures in real-life scenarios, you may find that you want to take additional action before containers are shut down or right after they are started. Kubernetes actually provides lifecycle hooks for just this kind of use case.
The following example controller definition defines both a postStart and a preStop action to take place before Kubernetes moves the container into the next stage of its lifecycle1:
apiVersion: v1
kind: ReplicationController
metadata:
  name: apache-hook
  labels:
    name: apache-hook
spec:
  replicas: 3
  selector:
    name: apache-hook
  template:
    metadata:
      labels:
        name: apache-hook
    spec:
      containers:
      - name: apache-hook
        image: bitnami/apache:latest
        ports:
        - containerPort: 80
        lifecycle:
          postStart:
            httpGet:
              path: http://my.registration-server.com/register/
              port: 80
          preStop:
            exec:
              command: ["/usr/local/bin/apachectl", "-k", "graceful-stop"]
Listing 2-10: apache-hooks-controller.yaml
You'll note that for the postStart hook we define an httpGet action, but for the preStop hook, we define an exec action. Just as with our health checks, the httpGet action attempts to make an HTTP call to the specific endpoint and port combination, while the exec action runs a local command in the container.
The httpGet and exec actions are both supported for both the postStart and preStop hooks. In the case of preStop, a parameter named reason will be sent to the handler as a parameter. See the following table (Table 2.1) for valid values:
Reason parameter    Failure description
Delete              Delete command issued via kubectl or the API
Health              Health check fails
Dependency          Dependency failure such as a disk mount failure or a default infrastructure pod crash
Table 2.1. Valid preStop reasons1
It's important to note that hook calls are delivered at least once. Therefore, any logic in the action should gracefully handle multiple calls. Another important note is that postStart runs before a pod enters its ready state. If the hook itself fails, the pod will be considered unhealthy.
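Because hook deliveries are at-least-once, a handler script should be safe to run twice. One common pattern is a marker-file guard; the sketch below is our own illustration, not something Kubernetes provides:

```shell
# prestop_hook: run the real shutdown work at most once, even if the hook
# is delivered multiple times. The marker file records a prior delivery.
prestop_hook() {
  marker="$1"
  if [ -e "$marker" ]; then
    return 0                     # duplicate delivery: nothing to do
  fi
  : > "$marker"                  # record that we handled this hook
  echo "running graceful shutdown work"
}
```

Calling the handler a second time with the same marker path is then a harmless no-op.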
Application scheduling
Now that we understand how to run containers in pods and even recover from failure, it may be useful to understand how new containers are scheduled on our cluster nodes.
As mentioned earlier, the default behavior for the Kubernetes scheduler is to spread container replicas across the nodes in our cluster. In the absence of all other constraints, the scheduler will place new pods on nodes with the least number of other pods belonging to matching services or replication controllers.
Additionally, the scheduler provides the ability to add constraints based on the resources available to the node. Today, these include minimum CPU and memory allocations. In terms of Docker, these use the cpu-shares and memory limit flags under the covers.
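As a rough illustration of that Docker mapping: Kubernetes expresses CPU in millicores (1000m = 1 CPU), and one full CPU corresponds to Docker's default of 1024 cpu-shares. The conversion below is a simplified sketch of that relationship, not the scheduler's actual code:

```shell
# milli_to_shares: convert Kubernetes milliCPU to the approximate Docker
# cpu-shares value (1000m maps to 1024 shares).
milli_to_shares() {
  echo $(( $1 * 1024 / 1000 ))
}
```

For example, the 1500m request used in the scheduling example below would translate to 1536 shares.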
When additional constraints are defined, Kubernetes will check a node for available resources. If a node does not meet all the constraints, it will move on to the next one. If no nodes can be found that meet the criteria, then we will see a scheduling error in the logs.
The Kubernetes roadmap also has plans to support networking and storage constraints. Because scheduling is such an important piece of overall operations and management for containers, we should expect to see many additions in this area as the project grows.
Scheduling example
Let's take a look at a quick example of setting some resource limits. If we look at our K8s dashboard, we can get a quick snapshot of the current state of resource usage on our cluster using https://<your master ip>/api/v1/proxy/namespaces/kube-system/services/kube-ui, as shown in the following screenshot:
Figure 2.13. Kube UI dashboard
In this case, we have fairly low CPU utilization, but a decent chunk of memory in use.
Let's see what happens when we try to spin up a few more pods, but this time, we will request 512Mi of memory and 1500m of CPU. We'll use 1500m to specify 1.5 CPUs; since each node only has 1 CPU, this should result in failure. Here's an example of the RC definition:
apiVersion: v1
kind: ReplicationController
metadata:
  name: node-js-constraints
  labels:
    name: node-js-constraints
spec:
  replicas: 3
  selector:
    name: node-js-constraints
  template:
    metadata:
      labels:
        name: node-js-constraints
    spec:
      containers:
      - name: node-js-constraints
        image: jonbaier/node-express-info:latest
        ports:
        - containerPort: 80
        resources:
          limits:
            memory: "512Mi"
            cpu: "1500m"
Listing 2-11: nodejs-constraints-controller.yaml
To create the RC from the preceding file, use the following command:
$ kubectl create -f nodejs-constraints-controller.yaml
The replication controller is created successfully, but if we run a get pods command, we'll note that the node-js-constraints pods are stuck in a pending state. If we look a little closer with the describe pods/<pod-id> command, we'll note a scheduling error:
$ kubectl get pods
$ kubectl describe pods/<pod-id>
The following screenshot is the result of the preceding command:
Figure 2.14. Pod description
Note that the failedScheduling error listed in the events is accompanied by Failed for reason PodFitsResources and possibly others on our screen. As you can see, Kubernetes could not find a fit in the cluster that met all the constraints we defined.
If we now modify our CPU constraint down to 500m and then recreate our replication controller, we should have all three pods running within a few moments.
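The fix amounts to editing a single value in Listing 2-11; a sketch of the adjusted resources section with the CPU limit lowered as described:

```
resources:
  limits:
    memory: "512Mi"
    cpu: "500m"
```

At 500m (half a CPU), each replica now fits on a 1-CPU node, so the scheduler can place all three.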
Summary
We've taken a look at the overall architecture for Kubernetes as well as the core constructs provided to build your services and application stacks. You should have a better understanding of how these abstractions make it easier to manage the lifecycle of your stack and/or services as a whole, and not just the individual components. Additionally, we took a first-hand look at how to manage some simple day-to-day tasks using pods, services, and replication controllers. We also looked at how to use Kubernetes to automatically respond to outages via health checks. Finally, we explored the Kubernetes scheduler and some of the constraints users can specify to influence scheduling placement.
Footnotes
1 https://github.com/GoogleCloudPlatform/kubernetes/blob/release-1.0/docs/user-guide/container-environment.md#container-hooks
Chapter 3. Core Concepts – Networking, Storage, and Advanced Services
In this chapter, we will be covering how the Kubernetes cluster handles networking and how it differs from other approaches. We will describe the three requirements for Kubernetes networking solutions and explore why these are key to ease of operations. Further, we will take a deeper dive into services and how the Kubernetes proxy works on each node. Towards the end, we will take a look at storage concerns and how we can persist data across pods and the container lifecycle. Finishing up, we will see a brief overview of some higher-level isolation features for multitenancy.
This chapter will discuss the following:
Kubernetes networking
Advanced services concepts
Service discovery
DNS
Persistent storage
Namespace limits and quotas
Kubernetes networking
Networking is a vital concern for production-level operations. At a service level, we need a reliable way for our application components to find and communicate with each other. Introduce containers and clustering into the mix and things get more complex, as we now have multiple networking namespaces to bear in mind. Communication and discovery now become a feat that must traverse container IP space, host networking, and sometimes even multiple data center network topologies.
Kubernetes benefits here from getting its ancestry from the clustering tools used by Google for the past decade. Networking is one area where Google has outpaced the competition with one of the largest networks on the planet. Early on, Google built its own hardware switches and Software-defined Networking (SDN) to give it more control, redundancy, and efficiency in its day-to-day network operations1. Many of the lessons learned from running and networking two billion containers per week have been distilled into Kubernetes and informed how K8s networking is done.
Networking in Kubernetes requires that each pod have its own IP address. Implementation details may vary based on the underlying infrastructure provider. However, all implementations must adhere to some basic rules. First, Kubernetes does not allow the use of Network Address Translation (NAT) for container-to-container or for container-to-node (minion) traffic. Second, the internal container IP address must match the IP address that is used to communicate with it.
These rules keep much of the complexity out of our networking stack and ease the design of the applications. Further, they eliminate the need to redesign network communication in legacy applications that are migrated from existing infrastructure. Finally, in greenfield applications, they allow for greater scale in handling hundreds, or even thousands, of services and application communications.
K8s achieves this pod-wide IP magic by using a placeholder. Remember the pause container we saw in Chapter 1, Kubernetes and Container Operations, under the Services running on the master section. It is often referred to as a pod infrastructure container, and it has the important job of reserving the network resources for our application containers that will be started later on. Essentially, the pause container holds the networking namespace and IP address for the entire pod and can be used by all the containers running within.
Networking comparisons
In getting a better understanding of networking in containers, it can be instructive to look at other approaches to container networking.
Docker
The Docker engine, by default, uses a bridged networking mode. In this mode, the container has its own networking namespace and is then bridged via virtual interfaces to the host (or node in the case of K8s) network.
In the bridged mode, two containers can use the same IP range because they are completely isolated. Therefore, service communication requires some additional port mapping through the host side of the network interfaces.
Docker also supports a host mode, which allows the containers to use the host network stack. Performance benefits greatly, since this removes a level of network virtualization; however, you lose the security of having an isolated network namespace.
Finally, Docker supports a container mode, which shares a network namespace between two containers. The containers will share the namespace and IP address, so the containers cannot use the same ports.
In all these scenarios, we are still on a single machine and, outside of host mode, the container IP space is not available outside that machine. Connecting containers across two machines then requires Network Address Translation (NAT) and port mapping for communication.
Docker plugins (libnetwork)
In order to address the cross-machine communication issue, Docker has released new network plugins, which just moved out of experimental support as we went to press. This plugin architecture allows networks to be created independently of the containers themselves. In this way, containers can join the same existing networks. Through the new plugin architecture, various drivers can be provided for different network use cases.
The first of these is the overlay driver. In order to coordinate across multiple hosts, they must all agree on the available networks and their topologies. The overlay driver uses a distributed key-value store to synchronize the network creation across multiple hosts.
It's important to note that the plugin mechanism will allow a wide range of networking possibilities in Docker. In fact, many of the third-party options, such as Weave, are already creating their own Docker network plugins.
Weave
Weave provides an overlay network for Docker containers. It can be used as a plugin with the new Docker network plugin interface, and it is also compatible with Kubernetes. Like many overlay networks, it is often criticized for the performance impact of the encapsulation overhead. Note that Weave has recently added a preview release with Virtual Extensible LAN (VXLAN) encapsulation support, which greatly improves performance. For more information, visit:
http://blog.weave.works/2015/06/12/weave-fast-datapath/
Flannel
Flannel comes from CoreOS and is an etcd-backed overlay. Flannel gives a full subnet to each host/node, enabling a similar pattern to the Kubernetes practice of a routable IP per pod or group of containers. Flannel includes an in-kernel VXLAN encapsulation mode for better performance and has an experimental multinetwork mode similar to the overlay Docker plugin. For more information, visit:
https://github.com/coreos/flannel
Project Calico
Project Calico is a layer 3-based networking model that uses the built-in routing functions of the Linux kernel. Routes are propagated to virtual routers on each host via Border Gateway Protocol (BGP). Calico can be used for anything from small-scale deploys to large Internet-scale installations. Because it works at a lower level on the network stack, there is no need for additional NAT, tunneling, or overlays. It can interact directly with the underlying network infrastructure. Additionally, it has support for network-level ACLs to provide additional isolation and security. For more information, visit:
http://www.projectcalico.org/
Balanced design
It's important to point out the balance Kubernetes is trying to achieve by placing the IP at the pod level. Using unique IP addresses at the host level is problematic as the number of containers grows. Ports must be used to expose services on specific containers and allow external communication. In addition to this, the complexity of running multiple services that may or may not know about each other (and their custom ports), and of managing the port space, becomes a big issue.
However, assigning an IP address to each container can be overkill. In cases of sizable scale, overlay networks and NATs are needed in order to address each container. Overlay networks add latency, and IP addresses would be taken up by backend services as well, since they need to communicate with their frontend counterparts.
Here, we really see an advantage in the abstractions that Kubernetes provides at the application and service level. If I have a web server and a database, we can keep them on the same pod and use a single IP address. The web server and database can use the local interface and standard ports to communicate, and no custom setup is required. Further, services on the backend are not needlessly exposed to other application stacks running elsewhere in the cluster (but possibly on the same host). Since the pod sees the same IP address that the applications running within it see, service discovery does not require any additional translation.
If you need the flexibility of an overlay network, you can still use an overlay at the pod level. Both the Weave and Flannel overlays, as well as the BGP-routed Project Calico, can be used with Kubernetes.
This is also very helpful in the context of scheduling the workloads. It is key to have a simple and standard structure for the scheduler to match constraints against and understand where space exists on the cluster's network at any given time. This is a dynamic environment with a variety of applications and tasks running, so any additional complexity here will have rippling effects.
There are also implications for service discovery. New services coming online must determine and register an IP address on which the rest of the world, or at least the cluster, can reach them. If NAT is used, the services will need an additional mechanism to learn their externally facing IP.
Advanced services
Let's explore the IP strategy as it relates to services and communication between containers. If you recall, in Chapter 2, Kubernetes – Core Concepts and Constructs, under the Services section, you learned that Kubernetes uses kube-proxy to determine the proper pod IP address and port serving each request. Behind the scenes, kube-proxy is actually using virtual IPs and iptables to make all this magic work.
Recall that kube-proxy is running on every host. Its first duty is to monitor the API on the Kubernetes master. Any updates to services will trigger an update to iptables from kube-proxy. For example, when a new service is created, a virtual IP address is chosen and a rule in iptables is set, which will direct its traffic to kube-proxy via a random port. Thus, we now have a way to capture service-destined traffic on this node. Since kube-proxy is running on all nodes, we have cluster-wide resolution for the service VIP. Additionally, DNS records can point to this virtual IP as well.
Now that we have a hook created in iptables, we still need to get the traffic to the servicing pods; however, the rule is only sending traffic to the service entry in kube-proxy at this point. Once kube-proxy receives the traffic for a particular service, it must then forward it to a pod in the service's pool of candidates. It does this using a random port that was selected during service creation. Refer to the following figure (Figure 3.1) for an overview of the flow:
Figure 3.1. Kube-proxy communication
At the time of writing this book, there are plans in the upcoming version 1.1 to include a kube-proxy that does not rely on the service entry and uses only iptables rules.
Tip
It is also possible to always forward traffic from the same client IP to the same backend pod/container using the sessionAffinity element in your service definition.
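A minimal sketch of what that element looks like in a service spec (the service name here is illustrative; the selector reuses our node-js labels):

```
apiVersion: v1
kind: Service
metadata:
  name: node-js-sticky
spec:
  sessionAffinity: ClientIP   # route repeat requests from a client IP to the same pod
  ports:
  - port: 80
  selector:
    name: node-js
```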
External services
In the last chapter, we saw a few service examples. For testing and demonstration purposes, we wanted all the services to be externally accessible. This was configured by the type: LoadBalancer element in our service definition. The LoadBalancer type creates an external load balancer on the cloud provider. We should note that support for external load balancers varies by provider, as does the implementation. In our case, we are using GCE, so integration is pretty smooth. The only additional setup needed is to open firewall rules for the external service ports.
Let's dig a little deeper and do a describe on one of the services from Chapter 2, Kubernetes – Core Concepts and Constructs, under the More on labels section:
$ kubectl describe service/node-js-labels
The following screenshot is the result of the preceding command:
Figure 3.2. Service description
In the output, in Figure 3.2, you'll note several key elements. Our namespace is set to default, Type: is LoadBalancer, and we have the external IP listed under LoadBalancer Ingress:. Further, we see Endpoints:, which shows us the IPs of the pods available to answer service requests.
Internal services
Let's explore the other types of services we can deploy. First, by default, services are internally facing only. You can specify a type of ClusterIP to achieve this, but if no type is defined, ClusterIP is the assumed type. Let's take a look at an example; note the lack of the type element:
apiVersion: v1
kind: Service
metadata:
  name: node-js-internal
  labels:
    name: node-js-internal
spec:
  ports:
  - port: 80
  selector:
    name: node-js
Listing 3-1: nodejs-service-internal.yaml
Use this listing to create the service definition file. You'll need a healthy version of the node-js RC (Listing 2-7: nodejs-health-controller-2.yaml). As you can see, the selector matches the pods labeled node-js that our RC launched in the last chapter. We will create the service and then list the currently running services with a filter:
$ kubectl create -f nodejs-service-internal.yaml
$ kubectl get services -l name=node-js-internal
The following screenshot is the result of the preceding command:
Figure 3.3. Internal service listing
As you can see, we have a new service, but only one IP. Further, the IP address is not externally accessible. We won't be able to test the service from a web browser this time. However, we can use the handy kubectl exec command and attempt to connect from one of the other pods. You will need node-js-pod (Listing 2-1: nodejs-pod.yaml) running. Then, you can execute the following command:
$ kubectl exec node-js-pod -- curl <node-js-internal IP>
This allows us to run a command inside the node-js-pod container, much as docker exec would. It then hits the internal service URL, which forwards to any pods with the node-js label.
If all is well, you should get the raw HTML output back. So, you've successfully created an internal-only service. This can be useful for backend services that you want to make available to other containers running in your cluster, but not open to the world at large.
Custom load balancing
A third type of service K8s allows is the NodePort type. This type allows us to expose a service through the host, or minion, on a specific port. In this way, we can use the IP address of any node (minion) and access our service on the assigned node port. Kubernetes will assign a node port by default in the range of 30000–32767, but you can also specify your own custom port. In the example in Listing 3-2: nodejs-service-nodeport.yaml, we choose port 30001, as follows:
apiVersion: v1
kind: Service
metadata:
  name: node-js-nodeport
  labels:
    name: node-js-nodeport
spec:
  ports:
  - port: 80
    nodePort: 30001
  selector:
    name: node-js
  type: NodePort
Listing 3-2: nodejs-service-nodeport.yaml
Once again, create this YAML definition file and create your service, as follows:
$ kubectl create -f nodejs-service-nodeport.yaml
The output should have a message like this:
Figure 3.4. New GCP firewall rule
You'll note a message about opening firewall ports. Similar to the external load balancer type, NodePort is exposing your service externally using ports on the nodes. This could be useful if, for example, you want to use your own load balancer in front of the nodes. Let's make sure that we open those ports on GCP before we test our new service.
From the GCE VM instance console, click on the network for any of your nodes (minions). In my case, it was default. Under firewall rules, we can add a rule by clicking Add firewall rule. Create a rule like the one shown in Figure 3.5:
Figure 3.5. New GCP firewall rule
We can now test our new service by opening a browser and using the IP address of any node (minion) in your cluster. The format to test the new service is: http://<Minion IP Address>:<NodePort>/
Cross-node proxy
Remember that kube-proxy is running on all the nodes, so even if the pod is not running there, traffic will be proxied to the appropriate host. Refer to Figure 3.6 for a visual on how the traffic flows. A user makes a request to an external IP or URL. The request is serviced by Node 1 in this case. However, the pod does not happen to run on this node. This is not a problem because the pod IP addresses are routable. So, kube-proxy simply passes traffic on to the pod IP for this service. The network routing then completes on Node 2, where the requested application lives.
Figure 3.6. Cross-node traffic
Custom ports
Services also allow you to map your traffic to different ports than the ones the containers and pods themselves expose. We will create a service that exposes port 90 and forwards traffic to port 80 on the pods. We will call the pod node-js-90 to reflect the custom port number. Create the following two definition files:
apiVersion: v1
kind: ReplicationController
metadata:
  name: node-js-90
  labels:
    name: node-js-90
spec:
  replicas: 3
  selector:
    name: node-js-90
  template:
    metadata:
      labels:
        name: node-js-90
    spec:
      containers:
      - name: node-js-90
        image: jonbaier/node-express-info:latest
        ports:
        - containerPort: 80
Listing 3-3: nodejs-customPort-controller.yaml
apiVersion: v1
kind: Service
metadata:
  name: node-js-90
  labels:
    name: node-js-90
spec:
  type: LoadBalancer
  ports:
  - port: 90
    targetPort: 80
  selector:
    name: node-js-90
Listing 3-4: nodejs-customPort-service.yaml
You'll note that in the service definition, we have a targetPort element. This element tells the service the port to use for the pods/containers in the pool. As we saw in previous examples, if you do not specify targetPort, it is assumed to be the same port as the service. The port element is still used as the service port, but in this case, we are going to expose the service on port 90 while the containers serve content on port 80.
Create this RC and service and open the appropriate firewall rules, as we did in the last example. It may take a moment for the external load balancer IP to propagate to the get service command. Once it does, you should be able to open and see our familiar web application in a browser using the following format: http://<external service IP>:90/
Multiple ports
Another custom port use case is that of multiple ports. Many applications expose multiple ports, such as HTTP on port 80 and port 8888 for web servers. The following example shows our app responding on both ports. Once again, we'll also need to add a firewall rule for this port, as we did for Listing 3-2: nodejs-service-nodeport.yaml previously:
apiVersion: v1
kind: ReplicationController
metadata:
  name: node-js-multi
  labels:
    name: node-js-multi
spec:
  replicas: 3
  selector:
    name: node-js-multi
  template:
    metadata:
      labels:
        name: node-js-multi
    spec:
      containers:
      - name: node-js-multi
        image: jonbaier/node-express-multi:latest
        ports:
        - containerPort: 80
        - containerPort: 8888
Listing 3-5: nodejs-multicontroller.yaml
apiVersion: v1
kind: Service
metadata:
  name: node-js-multi
  labels:
    name: node-js-multi
spec:
  type: LoadBalancer
  ports:
  - name: http
    protocol: TCP
    port: 80
  - name: fake-admin-http
    protocol: TCP
    port: 8888
  selector:
    name: node-js-multi
Listing 3-6: nodejs-multiservice.yaml
Note
Note that the application and container itself must be listening on both ports for this to work. In this example, port 8888 is used to represent a fake admin interface.
If, for example, you want to listen on port 443, you would need a proper SSL socket listening on the server.
Migrations, multicluster, and more
As you've seen so far, Kubernetes offers a high level of flexibility and customization to create a service abstraction around your containers running in the cluster. However, there may be times when you want to point to something outside your cluster.
An example of this would be working with legacy systems, or even applications running on another cluster. In the case of the former, this is a perfectly good strategy in order to migrate to Kubernetes and containers in general. We can begin to manage the service endpoints in Kubernetes while stitching the stack together using the K8s orchestration concepts. Additionally, we can even start bringing over pieces of the stack, such as the frontend, one at a time as the organization refactors applications for microservices and/or containerization.
To allow access to non-pod-based applications, the services construct allows you to use endpoints that are outside the cluster. Kubernetes actually creates an endpoint resource every time you create a service that uses selectors. The endpoints object keeps track of the pod IPs in the load balancing pool. You can see this by running a get endpoints command, as follows:
$ kubectl get endpoints
You should see something similar to this:
NAME          ENDPOINTS
http-pd       10.244.2.29:80,10.244.2.30:80,10.244.3.16:80
kubernetes    10.240.0.2:443
node-js       10.244.0.12:80,10.244.2.24:80,10.244.3.13:80
You'll note an entry for all the services we currently have running on our cluster. For most, the endpoints are just the IP of each pod running in an RC. As I mentioned, Kubernetes does this automatically based on the selector. As we scale the replicas in a controller with matching labels, Kubernetes will update the endpoints automatically.
If we want to create a service for something that is not a pod and therefore has no labels to select, we can easily do this with both a service and an endpoint definition, as follows:
apiVersion: v1
kind: Service
metadata:
  name: custom-service
spec:
  type: LoadBalancer
  ports:
  - name: http
    protocol: TCP
    port: 80
Listing 3-7: nodejs-custom-service.yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: custom-service
subsets:
- addresses:
  - ip: <X.X.X.X>
  ports:
  - name: http
    port: 80
    protocol: TCP
Listing 3-8: nodejs-custom-endpoint.yaml
In the preceding example, you'll need to replace <X.X.X.X> with a real IP address where the new service can point. In my case, I used the public load balancer IP from the node-js-multi service we created earlier. Go ahead and create these resources now.
If we now run a get endpoints command, we will see this IP address at port 80 associated with the custom-service endpoint. Further, if we look at the service details, we will see the IP listed in the Endpoints section:
$ kubectl describe service/custom-service
We can test out this new service by opening the custom-service external IP from a browser.
Custom addressing
Another option to customize services is with the clusterIP element. In our examples so far, we've not specified an IP address, which means that Kubernetes chooses the internal address of the service for us. However, we can add this element and choose the IP address in advance with something like clusterIP: 10.0.125.105.
There may be times when you don't want to load balance and would rather have DNS with A records for each pod. For example, software that needs to replicate data evenly to all nodes may rely on A records to distribute data. In this case, we can use an example like the following one and set clusterIP to None. Kubernetes will not assign an IP address and will instead only assign A records in DNS for each of the pods. If you are using DNS, the service should be available at node-js-none or node-js-none.default.cluster.local from within the cluster. We have the following code:
apiVersion: v1
kind: Service
metadata:
  name: node-js-none
  labels:
    name: node-js-none
spec:
  clusterIP: None
  ports:
  - port: 80
  selector:
    name: node-js
Listing 3-9: nodejs-headless-service.yaml
Test it out after you create this service with the trusty exec command:
$ kubectl exec node-js-pod -- curl node-js-none
Service discovery
As we discussed earlier, the Kubernetes master keeps track of all service definitions and updates. Discovery can occur in one of three ways. The first two methods use Linux environment variables. There is support for the Docker link style of environment variables, but Kubernetes also has its own naming convention. Here is an example of what our node-js service example might look like using K8s environment variables (note that IPs will vary):
NODE_JS_PORT_80_TCP=tcp://10.0.103.215:80
NODE_JS_PORT=tcp://10.0.103.215:80
NODE_JS_PORT_80_TCP_PROTO=tcp
NODE_JS_PORT_80_TCP_PORT=80
NODE_JS_SERVICE_HOST=10.0.103.215
NODE_JS_PORT_80_TCP_ADDR=10.0.103.215
NODE_JS_SERVICE_PORT=80
Listing 3-10: Service environment variables
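The naming convention in Listing 3-10 is mechanical: the service name is upper-cased and dashes become underscores to form the variable prefix. A small helper capturing that rule (our own sketch, not part of Kubernetes):

```shell
# service_env_prefix: derive the K8s environment-variable prefix from a
# service name (for example, node-js becomes NODE_JS).
service_env_prefix() {
  echo "$1" | tr 'a-z-' 'A-Z_'
}
```

So the node-js service yields variables such as NODE_JS_SERVICE_HOST and NODE_JS_SERVICE_PORT.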
Another option for discovery is through DNS. While environment variables can be useful when DNS is not available, they have drawbacks. The system only creates the variables at pod creation time, so services that come online later will not be discovered, or discovering them would require some additional tooling to update all the system environments.
DNS
DNS solves the issues seen with environment variables by allowing us to reference the services by their names. As services restart, scale out, or appear anew, the DNS entries will be updated, ensuring that the service name always points to the latest infrastructure. DNS is set up by default in most of the supported providers.
Tip
If DNS is supported by your provider, but not set up, you can configure the following variables in your default provider config when you create your Kubernetes cluster:
ENABLE_CLUSTER_DNS="${KUBE_ENABLE_CLUSTER_DNS:-true}"
DNS_SERVER_IP="10.0.0.10"
DNS_DOMAIN="cluster.local"
DNS_REPLICAS=1
With DNS active, services can be accessed in one of two forms: either the service name itself, <service-name>, or a fully qualified name that includes the namespace, <service-name>.<namespace-name>.cluster.local. In our examples, it would look similar to node-js-90 or node-js-90.default.cluster.local.
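Since the fully qualified form follows a fixed pattern, it can be constructed mechanically; a small helper (our own sketch) using the default domain from the preceding Tip:

```shell
# service_fqdn: build <service-name>.<namespace-name>.cluster.local, with
# the namespace defaulting to "default" and the domain to cluster.local.
service_fqdn() {
  name="$1"; namespace="${2:-default}"; domain="${3:-cluster.local}"
  echo "${name}.${namespace}.${domain}"
}
```

For example, the node-js-90 service in the default namespace resolves to node-js-90.default.cluster.local.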
Persistent storage
Let's switch gears for a moment and talk about another core concept: persistent storage. When you start moving from development to production, one of the most obvious challenges you face is the transient nature of the containers themselves. If you recall our discussion of layered file systems in Chapter 1, Kubernetes and Container Operations, the top layer is writable. (It's also frosting, which is delicious.) However, when the container dies, the data goes with it. The same is true for crashed containers that Kubernetes restarts.
This is where persistent disks (PDs), or volumes, come into play. A persistent volume that exists outside the container allows us to save our important data across container outages. Further, if we have a volume at the pod level, data can be shared between containers in the same application stack and within the same pod.
Docker itself has some support for volumes, but Kubernetes gives us persistent storage that lasts beyond the lifetime of a single container. The volumes are tied to pods and live and die with those pods. Additionally, a pod can have multiple volumes from a variety of sources. Let's take a look at some of these sources.
Temporary disks
One of the easiest ways to achieve improved persistence amid container crashes and data sharing within a pod is to use the emptyDir volume. This volume type can be used with either the storage volumes of the node machine itself or an optional RAM disk for higher performance.
Again, we improve our persistence beyond a single container, but when a pod is removed, the data will be lost. A machine reboot will also clear any data from RAM-type disks. There may be times when we just need some shared temporary space or have containers that process data and hand it off to another container before they die. Whatever the case, here is a quick example of using this temporary disk with the RAM-backed option.
Open your favorite editor and create a file like the one in Listing 3-11: storage-memory.yaml here:
apiVersion: v1
kind: Pod
metadata:
  name: memory-pd
spec:
  containers:
  - image: nginx:latest
    ports:
    - containerPort: 80
    name: memory-pd
    volumeMounts:
    - mountPath: /memory-pd
      name: memory-volume
  volumes:
  - name: memory-volume
    emptyDir:
      medium: Memory
Listing 3-11: storage-memory.yaml
It's probably second nature by now, but we will once again issue a create command followed by an exec command to see the folders in the container:
$ kubectl create -f storage-memory.yaml
$ kubectl exec memory-pd -- ls -lh | grep memory-pd
This runs the ls command inside the container itself. The output shows us a memory-pd folder at the top level. We use grep to filter the output, but you can run the command without | grep memory-pd to see all the folders.
Figure 3.7. Temporary storage inside a container
Again, this folder is quite temporary as everything is stored in the minion's RAM. When the node gets restarted, all the files will be erased. We will look at a more permanent example next.
Cloud volumes
Many companies will already have significant infrastructure running in the public cloud. Luckily, Kubernetes has native support for the persistent volume types provided by two of the most popular providers.
GCE persistent disks
Let's create a new GCE persistent volume. From the console, under Compute, go to Disks. On this new screen, click on the New disk button.
We'll be presented with a screen similar to Figure 3.8. Choose a name for this volume and give it a brief description. Make sure that the zone is the same as the nodes in your cluster; GCE PDs can only be attached to machines in the same zone.
Enter mysite-volume-1 for the Name. Choose a Source type of None (blank disk) and enter 10 (10 GB) as the value in Size (GB). Finally, click on Create.
Figure 3.8. GCE new persistent disk
The nice thing about PDs on GCE is that they allow for mounting to multiple machines (nodes in our case). However, when mounting to multiple machines, the volume must be in read-only mode. So, let's first mount this to a single pod, so we can create some files. Use Listing 3-12: storage-gce.yaml, as follows, to create a pod that will mount the disk in read/write mode:
apiVersion: v1
kind: Pod
metadata:
  name: test-gce
spec:
  containers:
  - image: nginx:latest
    ports:
    - containerPort: 80
    name: test-gce
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: gce-pd
  volumes:
  - name: gce-pd
    gcePersistentDisk:
      pdName: mysite-volume-1
      fsType: ext4
Listing 3-12: storage-gce.yaml
First, let's issue a create command followed by a describe to find out which node it is running on. Note the node and save the pod IP address for later. Then, open an SSH session into that node:
$ kubectl create -f storage-gce.yaml
$ kubectl describe pod/test-gce
$ gcloud compute --project "<Your project ID>" ssh --zone "<your gce zone>" "<Node running test-gce pod>"
Since we've already looked at the volume from inside the running container, let's access it directly from the minion node itself this time. We will run a df command to see where it is mounted:
$ df -h | grep mysite-volume-1
As you can see, the GCE volume is mounted directly to the node itself. We can use the mount path listed in the output of the earlier df command. Use cd to change to the folder now. Then, create a new file named index.html with your favorite editor:
$ cd /var/lib/kubelet/plugins/kubernetes.io/gce-pd/mounts/mysite-volume-1
$ vi index.html
Enter a quaint message such as Hello from my GCE PD!. Now save the file and exit the editor. If you recall from Listing 3-12: storage-gce.yaml, the PD is mounted directly to the NGINX html directory. So, let's test this out while we still have the SSH session open on the node. Do a simple curl command to the pod IP we wrote down earlier:
$ curl <Pod IP from Describe>
You should see Hello from my GCE PD! or whatever message you saved in the index.html file. In a real-world scenario, we could use the volume for an entire website or any other central storage. Let's take a look at running a set of load balanced web servers all pointing to the same volume.
First, leave the SSH session with exit. Before we proceed, we will need to remove our test-gce pod so that the volume can be mounted read-only across a number of nodes:
$ kubectl delete pod/test-gce
Now we can create an RC that will run three web servers, all mounting the same persistent volume, as follows:
apiVersion: v1
kind: ReplicationController
metadata:
  name: http-pd
  labels:
    name: http-pd
spec:
  replicas: 3
  selector:
    name: http-pd
  template:
    metadata:
      labels:
        name: http-pd
    spec:
      containers:
      - image: nginx:latest
        ports:
        - containerPort: 80
        name: http-pd
        volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: gce-pd
      volumes:
      - name: gce-pd
        gcePersistentDisk:
          pdName: mysite-volume-1
          fsType: ext4
          readOnly: true
Listing 3-13: http-pd-controller.yaml
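Beyond naming the disk directly in every pod spec, Kubernetes also provides the PersistentVolume and PersistentVolumeClaim objects, which let an administrator register storage separately from the pods that consume it. The following is a minimal sketch for our GCE disk; the object names mysite-pv and mysite-claim are illustrative choices, not part of the example above:

```yaml
# Hypothetical PV/PVC pair for the mysite-volume-1 disk (names are illustrative)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysite-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadOnlyMany
  gcePersistentDisk:
    pdName: mysite-volume-1
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysite-claim
spec:
  accessModes:
  - ReadOnlyMany
  resources:
    requests:
      storage: 10Gi
```

Pods would then reference the claim through a persistentVolumeClaim volume rather than naming the disk in each pod definition.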
Let’salsocreateanexternalservice,sowecanseeitfromoutsidethecluster:
apiVersion:v1
kind:Service
metadata:
name:http-pd
labels:
name:http-pd
spec:
type:LoadBalancer
ports:
-name:http
protocol:TCP
port:80
selector:
name:http-pd
Listing3-14:http-pd-service.yaml
Goaheadandcreatethesetworesourcesnow.WaitafewmomentsfortheexternalIPtogetassigned.Afterthis,adescribecommandwillgiveustheIPwecanuseinabrowser:
$kubectldescribeservice/http-pd
Thefollowingscreenshotistheresultoftheprecedingcommand:
Figure3.9.K8sservicewithGCEPDsharedacrossthreepods
TypetheIPaddressintoabrowser,andyoushouldseeyourfamiliarindex.htmlfileshowupwiththetextweenteredpreviously!
AWS Elastic Block Store
K8s also supports AWS Elastic Block Store (EBS) volumes. Like GCE PDs, EBS volumes are required to be attached to an instance running in the same availability zone. A further limitation is that EBS can only be mounted to a single instance at a time.
For brevity, we will not walk through an AWS example, but a sample YAML file is included to get you started. Again, remember to create the EBS volume before your pod:
apiVersion: v1
kind: Pod
metadata:
  name: test-aws
spec:
  containers:
  - image: nginx:latest
    ports:
    - containerPort: 80
    name: test-aws
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: aws-pd
  volumes:
  - name: aws-pd
    awsElasticBlockStore:
      volumeID: aws://<availability-zone>/<volume-id>
      fsType: ext4
Listing 3-15: storage-aws.yaml
Other PD options
Kubernetes supports a variety of other types of persistent storage. A full list can be found here:
http://kubernetes.io/v1.0/docs/user-guide/volumes.html#types-of-volumes
Here are a few that may be of particular interest:
nfs: This type allows us to mount a Network File System (NFS) share, which can be very useful for both persisting data and sharing it across the infrastructure
gitRepo: As you might have guessed, this option clones a Git repo into a new and empty folder
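As a rough illustration, mounting an NFS share looks much like our earlier GCE example. The server address and export path below are placeholders you would replace with your own, so treat this as a sketch rather than a working configuration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-nfs
spec:
  containers:
  - image: nginx:latest
    ports:
    - containerPort: 80
    name: test-nfs
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: nfs-share
  volumes:
  - name: nfs-share
    nfs:
      # Placeholders: point these at your own NFS server and export
      server: <nfs-server-address>
      path: /exports/html
      readOnly: false
```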
Multitenancy
Kubernetes also has an additional construct for isolation at the cluster level. In most cases, you can run Kubernetes and never worry about namespaces; everything will run in the default namespace if not specified. However, in cases where you run multitenancy communities or want broad-scale segregation and isolation of the cluster resources, namespaces can be used to this end.
To start, Kubernetes has two namespaces: default and kube-system. kube-system is used for all the system-level containers we saw in Chapter 1, Kubernetes and Container Operations, under the Services running on the minions section. The UI, logging, DNS, and so on are all run under kube-system. Everything else the user creates runs in the default namespace. However, our resource definition files can optionally specify a custom namespace. For the sake of experimenting, let's take a look at how to build a new namespace.
First, we'll need to create a namespace definition file like the one in this listing:
apiVersion: v1
kind: Namespace
metadata:
  name: test
Listing 3-16: test-ns.yaml
We can go ahead and create this file with our handy create command:
$ kubectl create -f test-ns.yaml
Now we can create resources that use the test namespace. The following is an example of a pod using this new namespace:
apiVersion: v1
kind: Pod
metadata:
  name: utility
  namespace: test
spec:
  containers:
  - image: debian:latest
    command:
    - sleep
    - "3600"
    name: utility
Listing 3-17: ns-pod.yaml
While the pod can still access services in other namespaces, it will need to use the long DNS form of <service-name>.<namespace-name>.cluster.local. For example, if you were to run a command from inside the container in Listing 3-17: ns-pod.yaml, you could use http-pd.default.cluster.local to access the PD example from Listing 3-14: http-pd-service.yaml.
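To see the long DNS form in action, a throwaway pod in the test namespace could fetch the page through the fully qualified service name. This sketch assumes the http-pd service from Listing 3-14 is still running and relies on the wget utility bundled with the busybox image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dns-check
  namespace: test
spec:
  restartPolicy: Never
  containers:
  - name: dns-check
    image: busybox
    # Fetch the page via the cross-namespace DNS name
    command:
    - wget
    - "-qO-"
    - http://http-pd.default.cluster.local
```

Checking this pod's logs afterward (remembering --namespace=test) should show the index.html content served by the http-pd pods.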
Limits
Let's inspect our new namespace a bit more. Run the describe command as follows:
$ kubectl describe namespace/test
The following screenshot is the result of the preceding command:
Figure 3.10. Namespace describe
Kubernetes allows you to both limit the resources used by individual pods or containers and limit the resources used by the overall namespace using quotas. You'll note that there are no resource limits or quotas currently set on the test namespace.
Suppose we want to limit the footprint of this new namespace; we can set quotas such as the following:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: test-quotas
  namespace: test
spec:
  hard:
    pods: 3
    services: 1
    replicationcontrollers: 1
Listing 3-18: quota.yaml
Note
Note that, in reality, namespaces would be for larger application communities and would probably never have quotas this low. I am using this in order to ease the illustration of the capability in the example.
Here, we will create a quota of 3 pods, 1 RC, and 1 service for the test namespace. As you probably guessed, this is executed once again by our trusty create command:
$ kubectl create -f quota.yaml
Now that we have that in place, let's use describe on the namespace as follows:
$ kubectl describe namespace/test
The following screenshot is the result of the preceding command:
Figure 3.11. Namespace describe after the quota is set
You'll note that we now have some values listed in the quota section and that the limits section is still blank. We also have a Used column, which lets us know how close to the limits we are at the moment. Let's try to spin up a few pods using the following definition:
apiVersion: v1
kind: ReplicationController
metadata:
  name: busybox-ns
  namespace: test
  labels:
    name: busybox-ns
spec:
  replicas: 4
  selector:
    name: busybox-ns
  template:
    metadata:
      labels:
        name: busybox-ns
    spec:
      containers:
      - name: busybox-ns
        image: busybox
        command:
        - sleep
        - "3600"
Listing 3-19: busybox-ns.yaml
You'll note that we are creating four replicas of this basic pod. After using create to build this RC, run the describe command on the test namespace once more. You'll note that the used values for pods and RCs are at their maximums. However, we asked for four replicas and only see three pods in use.
Let’sseewhat’shappeningwithourRC.Youmighttempttodothatwiththecommandhere:
kubectldescriberc/busybox-ns
However,ifyoutry,you’llbedisparagedtoseeanotfoundmessagefromtheserver.ThisisbecausewecreatedthisRCinanewnamespaceandkubectlassumesthedefaultnamespaceifnotspecified.Thismeansthatweneedtospecify--namepsace=testwitheverycommandwhenwewishtoaccessresourcesinthetestnamespace.
Tip
We can also set the current namespace by working with the context settings. First, we need to find our current context, which is found with the following command:
$ kubectl config view | grep current-context
Next, we can take that context and set the namespace variable like the following:
$ kubectl config set-context <Current Context> --namespace=test
Now you can run the kubectl command without the need to specify the namespace. Just remember to switch back when you want to look at the resources running in your default namespace.
Run the command with the namespace specified like so. If you've set your current namespace as demonstrated in the tip box, you can leave off the --namespace argument:
$ kubectl describe rc/busybox-ns --namespace=test
The following screenshot is the result of the preceding command:
Figure 3.12. Namespace quotas
As you can see in the preceding image, the first three pods were successfully created, but our final one fails with the error Limited to 3 pods.
This is an easy way to set limits for resources partitioned out at a community scale. It's worth noting that you can also set quotas for CPU, memory, persistent volumes, and secrets. Additionally, limits work similarly to quotas, but they set the limit for each pod or container within the namespace.
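Per-pod and per-container limits are set with a LimitRange object rather than a ResourceQuota. We won't walk through one here, but a sketch for the test namespace might look like the following; the values are arbitrary, chosen only to show the shape of the object:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: test-limits
  namespace: test
spec:
  limits:
  - type: Container
    # Arbitrary illustrative values
    max:
      cpu: 500m
      memory: 512Mi
    default:
      cpu: 100m
      memory: 128Mi
```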
Summary
We took a deeper look into networking and services in Kubernetes. You should now understand how networking communications are designed in K8s and feel comfortable accessing your services internally and externally. We saw how kube-proxy balances traffic both locally and across the cluster. We also looked briefly at how DNS and service discovery are achieved in Kubernetes. In the later portion of the chapter, we explored a variety of persistent storage options. We finished off with a quick look at namespaces and isolation for multitenancy.
Footnotes
1. http://www.wired.com/2015/06/google-reveals-secret-gear-connects-online-empire/
Chapter 4. Updates and Gradual Rollouts
This chapter will expand upon the core concepts, showing the reader how to roll out updates and test new features of their application with minimal disruption to uptime. It will cover the basics of doing application updates, gradual rollouts, and A/B testing. In addition, we will look at scaling the Kubernetes cluster itself.
This chapter will discuss the following topics:
Application scaling
Rolling updates
A/B testing
Scaling up your cluster
Example setup
Before we start exploring the various capabilities built into Kubernetes for scaling and updates, we will need a new example environment. We are going to use a variation of our previous container image with a blue background (refer to Figure 4.2 for a comparison). We have the following code:
apiVersion: v1
kind: ReplicationController
metadata:
  name: node-js-scale
  labels:
    name: node-js-scale
spec:
  replicas: 1
  selector:
    name: node-js-scale
  template:
    metadata:
      labels:
        name: node-js-scale
    spec:
      containers:
      - name: node-js-scale
        image: jonbaier/pod-scaling:0.1
        ports:
        - containerPort: 80
Listing 4-1: pod-scaling-controller.yaml
apiVersion: v1
kind: Service
metadata:
  name: node-js-scale
  labels:
    name: node-js-scale
spec:
  type: LoadBalancer
  sessionAffinity: ClientIP
  ports:
  - port: 80
  selector:
    name: node-js-scale
Listing 4-2: pod-scaling-service.yaml
Create these resources with the following commands:
$ kubectl create -f pod-scaling-controller.yaml
$ kubectl create -f pod-scaling-service.yaml
Scaling up
Over time, as you run your applications in the Kubernetes cluster, you will find that some applications need more resources, whereas others can manage with fewer. Instead of removing the entire RC (and associated pods), we want a more seamless way to scale our application up and down.
Thankfully, Kubernetes includes a scale command, which is suited specifically for this purpose. In our new example, we have only one replica running. You can check this with a get pods command:
$ kubectl get pods -l name=node-js-scale
Let's try scaling that up to three with the following command:
$ kubectl scale --replicas=3 rc/node-js-scale
If all goes well, you'll simply see the word scaled in the output of your terminal window.
Tip
Optionally, you can specify the --current-replicas flag as a verification step. The scaling will only occur if the actual number of replicas currently running matches this count.
After listing our pods once again, we should now see three pods running with a name similar to node-js-scale-XXXXX, where the Xs are a random string.
You can also use the scale command to reduce the number of replicas. In either case, the scale command adds or removes the necessary pod replicas, and the service automatically updates and balances across new or remaining replicas.
Smooth updates
Scaling our application up and down as our resource demands change is useful for many production scenarios, but what about simple application updates? Any production system will have code updates, patches, and feature additions. These could be occurring monthly, weekly, or even daily. Making sure that we have a reliable way to push out these changes without interruption to our users is a paramount consideration.
Once again, we benefit from the years of experience the Kubernetes system is built on. There is built-in support for rolling updates with the 1.0 version. The rolling-update command allows us to update entire RCs or just the underlying Docker image used by each replica. We can also specify an update interval, which will allow us to update one pod at a time and wait until proceeding to the next.
Let's take our scaling example and perform a rolling update to the 0.2 version of our container image. We will use an update interval of 2 minutes, so we can watch the process as it happens in the following way:
$ kubectl rolling-update node-js-scale --image=jonbaier/pod-scaling:0.2 --update-period="2m"
You should see some text about creating a new RC named node-js-scale-XXXXX, where the Xs will be a random string of numbers and letters. In addition, you will see the beginning of a loop that is starting one replica of the new version and removing one from the existing RC. This process will continue until the new RC has the full count of replicas running.
If we want to follow along in real time, we can open another terminal window and use the get pods command, along with a label filter, to see what's happening:
$ kubectl get pods -l name=node-js-scale
This command will filter for pods with node-js-scale in the name. If you run this after issuing the rolling-update command, you should see several pods running as it creates new versions and removes the old ones one by one.
The full output of the previous rolling-update command should look something like Figure 4.1, as follows:
Figure 4.1. The scaling output
As we can see here, Kubernetes is first creating a new RC named node-js-scale-10ea08ff9a118ac6a93f85547ed28f6. K8s then loops through, one at a time, creating a new pod in the new controller and removing one from the old one. This continues until the new controller has the full replica count and the old one is at zero. After this, the old controller is deleted and the new one is renamed with the original controller name.
If you run a get pods command now, you'll note that the pods still all have a longer name. Alternatively, we could have specified the name of a new controller in the command, and Kubernetes would have created a new RC and pods using that name. Once again, the controller with the old name simply disappears after the update is complete. I recommend specifying a new name for the updated controller to avoid confusion in your pod naming down the line. The same update command with this method would look like this:
$ kubectl rolling-update node-js-scale node-js-scale-v2.0 --image=jonbaier/pod-scaling:0.2 --update-period="2m"
Using the static external IP address from the service we created in the first section, we can open the service in a browser. We should see our standard container information page. However, you'll note that the title now says Pod Scaling v0.2 and the background is light yellow.
Figure 4.2. v0.1 and v0.2 (side by side)
It's worth noting that during the entire update process, we've only been looking at pods and RCs. We didn't do anything with our service, but the service is still running fine and now directing to the new version of our pods. This is because our service is using label selectors for membership. Because both our old and new replicas use the same labels, the service has no problem using the new pods to service requests. The updates are done on the pods one by one, so it's seamless for the users of the service.
Testing, releases, and cutovers
The rolling update feature can work well for a simple blue-green deployment scenario. However, in a real-world blue-green deployment with a stack of multiple applications, there can be a variety of interdependencies that require in-depth testing. The update-period flag allows us to add a delay in which some testing can be done, but this will not always be satisfactory for testing purposes.
Similarly, you may want partial changes to persist for a longer time and all the way up to the load balancer or service level. For example, you may wish to A/B test a new user interface feature with a portion of your users. Another example is running a canary release (a replica in this case) of your application on new infrastructure, such as a newly added cluster node.
Let's take a look at an A/B testing example. For this example, we will need to create a new service that uses sessionAffinity. We will set the affinity to ClientIP, which will allow us to forward clients to the same backend pod. This is key if we want a portion of our users to see one version while others see another:
apiVersion: v1
kind: Service
metadata:
  name: node-js-scale-ab
  labels:
    service: node-js-scale-ab
spec:
  type: LoadBalancer
  ports:
  - port: 80
  sessionAffinity: ClientIP
  selector:
    service: node-js-scale-ab
Listing 4-3: pod-AB-service.yaml
Create this service as usual with the create command as follows:
$ kubectl create -f pod-AB-service.yaml
This will create a service that will point to our pods running both version 0.2 and 0.3 of the application. Next, we will create the two RCs, which create two replicas of the application each. One set will have version 0.2 of the application, and the other will have version 0.3, as shown here:
apiVersion: v1
kind: ReplicationController
metadata:
  name: node-js-scale-a
  labels:
    name: node-js-scale-a
    version: "0.2"
    service: node-js-scale-ab
spec:
  replicas: 2
  selector:
    name: node-js-scale-a
    version: "0.2"
    service: node-js-scale-ab
  template:
    metadata:
      labels:
        name: node-js-scale-a
        version: "0.2"
        service: node-js-scale-ab
    spec:
      containers:
      - name: node-js-scale
        image: jonbaier/pod-scaling:0.2
        ports:
        - containerPort: 80
        livenessProbe:
          # An HTTP health check
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 30
          timeoutSeconds: 5
        readinessProbe:
          # An HTTP health check
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 30
          timeoutSeconds: 1
Listing 4-4: pod-A-controller.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: node-js-scale-b
  labels:
    name: node-js-scale-b
    version: "0.3"
    service: node-js-scale-ab
spec:
  replicas: 2
  selector:
    name: node-js-scale-b
    version: "0.3"
    service: node-js-scale-ab
  template:
    metadata:
      labels:
        name: node-js-scale-b
        version: "0.3"
        service: node-js-scale-ab
    spec:
      containers:
      - name: node-js-scale
        image: jonbaier/pod-scaling:0.3
        ports:
        - containerPort: 80
        livenessProbe:
          # An HTTP health check
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 30
          timeoutSeconds: 5
        readinessProbe:
          # An HTTP health check
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 30
          timeoutSeconds: 1
Listing 4-5: pod-B-controller.yaml
Note that we have the same service label, so these replicas will also be added to the service pool based on this selector. We also have livenessProbe and readinessProbe defined to make sure that our new version is working as expected. Again, use the create command to spin up the controllers:
$ kubectl create -f pod-A-controller.yaml
$ kubectl create -f pod-B-controller.yaml
Now we have a service balancing to both versions of our app. In a true A/B test, we would now want to start collecting metrics on the visits to each version. Again, we have the sessionAffinity set to ClientIP, so all requests will go to the same pod. Some users will see v0.2, and some will see v0.3.
Note
Because we have sessionAffinity turned on, your test will likely show the same version every time. This is expected, and you would need to attempt a connection from multiple IP addresses to see both user experiences with each version.
Since the versions are each on their own pod, one can easily separate logging and even add a logging container to the pod definition for a sidecar logging pattern. For brevity, we will not cover that setup in this book, but we will look at some of the logging tools in Chapter 6, Monitoring and Logging.
We can start to see how this process would be useful for a canary release or a manual blue-green deployment. We can also see how easy it is to launch a new version and slowly transition over to the new release.
Let's look at a basic transition quickly. It's really as simple as a few scale commands, which are as follows:
$ kubectl scale --replicas=3 rc/node-js-scale-b
$ kubectl scale --replicas=1 rc/node-js-scale-a
$ kubectl scale --replicas=4 rc/node-js-scale-b
$ kubectl scale --replicas=0 rc/node-js-scale-a
Tip
Use the get pods command combined with the -l filter in between scale commands to watch the transition as it happens.
Now we have fully transitioned over to version 0.3 (node-js-scale-b). All users will now see version 0.3 of the site. We have four replicas of version 0.3 and zero of 0.2. If you run a get rc command, you will notice that we still have an RC for 0.2 (node-js-scale-a). As a final cleanup, we can remove that controller completely as follows:
$ kubectl delete rc/node-js-scale-a
Tip
In the newly released version 1.1, K8s has a new Horizontal Pod Autoscaler construct, which allows you to automatically scale pods based on CPU utilization.
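We won't cover the autoscaler in this book, but for reference, a definition in the 1.1 release looked roughly like the following sketch. Note that this construct was still in beta (extensions/v1beta1) at the time, and the exact field names may differ in your version, so treat this as an assumption-laden illustration rather than a definitive manifest:

```yaml
# Hedged sketch of a v1.1-era Horizontal Pod Autoscaler (field names may vary)
apiVersion: extensions/v1beta1
kind: HorizontalPodAutoscaler
metadata:
  name: node-js-scale-b
spec:
  scaleRef:
    kind: ReplicationController
    name: node-js-scale-b
    namespace: default
    subresource: scale
  minReplicas: 2
  maxReplicas: 5
  cpuUtilization:
    targetPercentage: 70
```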
Growing your cluster
All these techniques are great for scaling the application, but what about the cluster itself? At some point, you will pack the nodes full and need more resources to schedule new pods for your workloads.
Tip
When you create your cluster, you can customize the starting number of nodes (minions) with the NUM_MINIONS environment variable. By default, it is set to 4. The following example shows how to set it to 5 before running kube-up.sh:
$ export NUM_MINIONS=5
Bear in mind that changing this after the cluster is started will have no effect. You would need to tear down the cluster and create it once again. Thus, this section will show you how to add nodes to an existing cluster without rebuilding it.
Scaling up the cluster on GCE
Scaling up your cluster on GCE is actually quite easy. The existing plumbing uses managed instance groups in GCE, which allow you to easily add more machines of a standard configuration to the group via an instance template.
You can see this template easily in the GCE console. First, open the console; by default, this should open your default project console. If you are using another project for your Kubernetes cluster, simply select it from the project dropdown at the top of the page.
On the side panel, under Compute and then Compute Engine, select Instance templates. You should see a template titled kubernetes-minion-template. Note that the name could vary slightly if you've customized your cluster naming settings. Click on that template to see the details. Refer to the following screenshot:
Figure 4.3. The GCE Instance template for minions
You'll see a number of settings, but the meat of the template is under Custom metadata. Here, you will see a number of environment variables and also a startup script that is run after a new machine instance is created. These are the core components that allow us to create new machines and have them automatically added to the available cluster nodes.
Because the template for new machines is already created, it is very simple to scale out our cluster in GCE. Simply go to Instance groups, located right above the Instance templates link on the side panel. Again, you should see a group titled kubernetes-minion-group or something similar. Click on that group to see the details, as shown in the following screenshot:
Figure 4.4. The GCE Instance group for minions
You'll see a page with a CPU metrics graph and four instances listed here. By default, the cluster creates four nodes. We can modify this group by clicking on the Edit group button at the top of the page.
Figure 4.5. The GCE Instance group edit page
You should see kubernetes-minion-template selected in Instance template, which we reviewed a moment ago. You'll also see an Autoscaling setting, which is Off by default, and an instance count of 4. Simply increment this to 5 and click on Save. You'll be taken back to the group details page and see a pop-up dialog showing the pending changes.
In a few minutes, you'll have a new instance listed on the details page. We can test that this is ready by using the get nodes command from the command line:
$ kubectl get nodes
Autoscaling and scaling down
In the preceding example, we left autoscaling turned off. However, there may be some cases where you want to automatically scale your cluster up and down. Turning on autoscaling will allow you to choose a metric to monitor and scale on. A minimum and maximum number of instances can be defined, as well as a cooldown period between actions. For more information on autoscaling in GCE, refer to https://cloud.google.com/compute/docs/autoscaler/?hl=en_US#scaling_based_on_cpu_utilization.
Note
A word of caution on autoscaling and scale down in general: first, if we repeat the earlier process and decrease the count down to four, GCE will remove one node. However, it will not necessarily be the node you just added. The good news is that pods will be rescheduled on the remaining nodes. However, they can only be rescheduled where resources are available. If you are close to full capacity and shut down a node, there is a good chance that some pods will not have a place to be rescheduled. In addition, this is not a live migration, so any application state will be lost in the transition. The bottom line is that you should carefully consider the implications before scaling down or implementing an autoscaling scheme.
Scaling up the cluster on AWS
The AWS provider code also makes it very easy to scale up your cluster. Similar to GCE, the AWS setup uses autoscaling groups to create the default four minion nodes.
This can also be easily modified using the CLI or the web console. In the console, from the EC2 page, simply go to the Auto Scaling Groups section at the bottom of the menu on the left. You should see a name similar to kubernetes-minion-group. Select that group and you will see details as shown in Figure 4.6:
Figure 4.6. Kubernetes minion autoscaling details
We can scale this group up easily by clicking on Edit. Then, change the Desired, Min, and Max values to 5 and click on Save. In a few minutes, you'll have the fifth node available. You can once again check this using the get nodes command.
Scaling down is the same process, but remember that we discussed the same considerations in the previous Scaling up the cluster on GCE section. Workloads could get abandoned or, at the very least, unexpectedly restarted.
Scaling manually
For other providers, creating new minions may not be an automated process. Depending on your provider, you'll need to perform various manual steps. It can be helpful to look at the provider-specific scripts under the cluster directory.
Summary
We should now be a bit more comfortable with the basics of application scaling in Kubernetes. We also looked at the built-in functions in order to roll updates, as well as a manual process for testing and slowly integrating updates. Finally, we took a look at scaling the nodes of our underlying cluster and increasing the overall capacity for our Kubernetes resources.
Chapter 5. Continuous Delivery
This chapter will show the reader how to integrate their build pipeline and deployments with a Kubernetes cluster. It will cover the concept of using Gulp.js and Jenkins in conjunction with your Kubernetes cluster.
This chapter will discuss the following topics:
Integration with continuous deployment pipelines
Using Gulp.js with Kubernetes
Integrating Jenkins with Kubernetes
Integration with continuous delivery
Continuous integration and delivery are key components in modern development shops. Speed to market or mean-time-to-revenue is crucial for any company that is creating their own software. We'll see how Kubernetes can help you.
CI/CD (short for Continuous Integration/Continuous Delivery) often requires ephemeral build and test servers to be available whenever changes are pushed to the code repository. Docker and Kubernetes are well suited for this task, as it's easy to create containers in a few seconds and just as easy to remove them after builds are run. In addition, if you already have a large portion of infrastructure available on your cluster, it can make sense to utilize the idle capacity for builds and testing.
In this chapter, we will explore two popular tools used in building and deploying software. Gulp.js is a simple task runner used to automate the build process using JavaScript and Node.js. Jenkins is a fully fledged continuous integration server.
Gulp.js
Gulp.js gives us the framework to do build as code. Similar to infrastructure as code, this allows us to programmatically define our build process. We will walk through a short example to demonstrate how you can create a complete workflow, from a Docker image build to the final Kubernetes service.
Prerequisites
For this section, you will need a Node.js environment installed and ready, including the node package manager (npm). If you do not already have these packages installed, you can find instructions at https://docs.npmjs.com/getting-started/installing-node.
You can check whether Node.js is installed correctly with a node -v command.
You'll also need the Docker CLI and a Docker Hub account to push a new image. You can find instructions to install the Docker CLI at https://docs.docker.com/installation/.
You can easily create a Docker Hub account at https://hub.docker.com/.
After you have your credentials, you can log in with the CLI using $ docker login.
Gulp build example
Let's start by creating a project directory named node-gulp:
$ mkdir node-gulp
$ cd node-gulp
Next, we will install the gulp package and check whether it's ready by running the npm command with the version flag as follows:
$ npm install -g gulp
You may need to open a new terminal window to make sure that gulp is on your path. Also, make sure to navigate back to your node-gulp directory:
$ gulp -v
Next, we will install gulp locally in our project folder as well as the gulp-git and gulp-shell plugins as follows:
$ npm install --save-dev gulp
$ npm install gulp-git --save
$ npm install --save-dev gulp-shell
Finally, we need to create a Kubernetes controller and service definition file, as well as a gulpfile.js to run all our tasks. Again, these files are available in the book file bundle if you wish to copy them instead. Refer to the following code:
apiVersion: v1
kind: ReplicationController
metadata:
  name: node-gulp
  labels:
    name: node-gulp
spec:
  replicas: 1
  selector:
    name: node-gulp
  template:
    metadata:
      labels:
        name: node-gulp
    spec:
      containers:
      - name: node-gulp
        image: <your username>/node-gulp:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 80
Listing 5-1: node-gulp-controller.yaml
As you can see, we have a basic controller. You will need to replace <your username>/node-gulp:latest with your username:
apiVersion: v1
kind: Service
metadata:
  name: node-gulp
  labels:
    name: node-gulp
spec:
  type: LoadBalancer
  ports:
  - name: http
    protocol: TCP
    port: 80
  selector:
    name: node-gulp
Listing 5-2: node-gulp-service.yaml
Next, we have a simple service that selects the pods from our controller and creates an external load balancer for access, as follows:
var gulp = require('gulp');
var git = require('gulp-git');
var shell = require('gulp-shell');

// Clone a remote repo
gulp.task('clone', function () {
  return git.clone('https://github.com/jonbaierCTP/getting-started-with-kubernetes.git', function (err) {
    if (err) throw err;
  });
});

// Update codebase
gulp.task('pull', function () {
  return git.pull('origin', 'master', {cwd: './getting-started-with-kubernetes'}, function (err) {
    if (err) throw err;
  });
});

// Build Docker image
gulp.task('docker-build', shell.task([
  'docker build -t <your username>/node-gulp ./getting-started-with-kubernetes/docker-image-source/container-info/',
  'docker push <your username>/node-gulp'
]));

// Run new pod
gulp.task('create-kube-pod', shell.task([
  'kubectl create -f node-gulp-controller.yaml',
  'kubectl create -f node-gulp-service.yaml'
]));

// Update pod
gulp.task('update-kube-pod', shell.task([
  'kubectl delete -f node-gulp-controller.yaml',
  'kubectl create -f node-gulp-controller.yaml'
]));
Listing 5-3: gulpfile.js
Finally, we have the gulpfile.js file. This is where all our build tasks are defined. Again, fill in your username in both the <your username>/node-gulp sections.
Looking through the file, first, the clone task downloads our image source code from GitHub. The pull task executes a git pull on the cloned repository. Next, the docker-build task builds an image from the container-info subfolder and pushes it to Docker Hub. Finally, we have the create-kube-pod and update-kube-pod tasks. As you can guess, the create-kube-pod task creates our controller and service for the first time, whereas the update-kube-pod task simply replaces the controller.
Let's go ahead and run these commands and see our end-to-end workflow:
$ gulp clone
$ gulp docker-build
The first time through, you can run the create-kube-pod command as follows:
$ gulp create-kube-pod
This is all there is to it. If we run a quick kubectl describe command for the node-gulp service, we can get the external IP for our new service. Browse to that IP and you'll see the familiar container-info application running. Note that the host starts with node-gulp, just as we named it in the previously mentioned pod definition.
Figure 5.1. Service launched by Gulp build
On subsequent updates, run pull and update-kube-pod, as shown here:
$ gulp pull
$ gulp docker-build
$ gulp update-kube-pod
This is a very simple example, but you can begin to see how easy it is to coordinate your build and deployment end to end with a few simple lines of code. Next, we will look at using Kubernetes to actually run builds using Jenkins.
Kubernetes plugin for Jenkins
One way we can use Kubernetes for our CI/CD pipeline is to run our Jenkins build slaves in a containerized environment. Luckily, there is already a plugin, written by Carlos Sanchez, which allows you to run Jenkins slaves in Kubernetes pods.
Prerequisites
You'll need a Jenkins server handy for this next example. If you don't have one you can use, there is a Docker image available at https://hub.docker.com/_/jenkins/.
Running it from the Docker CLI is as simple as this:
docker run --name myjenkins -p 8080:8080 -v /var/jenkins_home jenkins
Installing plugins
Log in to your Jenkins server, and from your home dashboard, click on Manage Jenkins. Then, select Manage Plugins from the list.
Figure 5.2. Jenkins main dashboard
The credentials plugin is required, but it should be installed by default. We can check the Installed tab if in doubt, as shown in the following screenshot:
Figure 5.3. Jenkins installed plugins
Next, we can click on the Available tab. The Kubernetes plugin should be located under Cluster Management and Distributed Build or Misc (cloud). There are many plugins, so you can alternatively search for Kubernetes on the page. Check the box for Kubernetes Plugin and click on Install without restart.
This will install the Kubernetes Plugin and the Durable Task Plugin.
Figure 5.4. Plugin installation
Tip
If you wish to install a nonstandard version or just like to tinker, you can optionally download the plugins. The latest Kubernetes and Durable Task plugins can be found here:
Kubernetes plugin: https://wiki.jenkins-ci.org/display/JENKINS/Kubernetes+Plugin
Durable Task plugin: https://wiki.jenkins-ci.org/display/JENKINS/Durable+Task+Plugin
Next, we can click on the Advanced tab and scroll down to Upload Plugin. Navigate to the durable-task.hpi file and click on Upload. You should see a screen that shows an installation progress bar. After a minute or two, it will update to Success.
Finally, install the main Kubernetes plugin. On the left-hand side, click on Manage Plugins and then the Advanced tab once again. This time, upload the kubernetes.hpi file and click on Upload. After a few minutes, the installation should be complete.
Configuring the Kubernetes plugin
Click on Back to Dashboard or the Jenkins link in the top-left corner. From the main dashboard page, click on the Credentials link. Choose a domain from the list; in my case, I just used the default Global credentials domain. Click on Add Credentials.
Figure 5.5. Add credentials screen
Leave Kind as Username with password and Scope as Global. Add your Kubernetes admin credentials. Remember that you can find these by running the config command:
$ kubectl config view
Give it a sensible description and click on OK.
Now that we have our credentials saved, we can add our Kubernetes server. Click on the Jenkins link in the top-left corner and then Manage Jenkins. From there, select Configure System and scroll all the way down to the Cloud section. Select Kubernetes from the Add a new cloud dropdown and a Kubernetes section will appear as follows:
Figure 5.6. New Kubernetes cloud settings
You'll need to specify the URL for your master in the form of https://<Master IP>/.
Next, choose the credentials we added from the drop-down list. Since Kubernetes uses a self-signed certificate by default, you'll also need to check the Disable https certificate check checkbox.
Click Test Connection and if all goes well, you should see Connection successful appearing next to the button.
Tip
If you are using an older version of the plugin, you may not see the Disable https certificate check checkbox. If this is the case, you will need to install the self-signed certificate directly on the Jenkins Master.
Finally, we will add a pod template by choosing Kubernetes Pod Template from the Add Pod Template dropdown next to Images.
This will create another new section. Use jenkins-slave for the Name and Labels section. Use csanchez/jenkins-slave for the Docker Image and leave /home/jenkins for the Jenkins Slave root directory.
Tip
Labels can be used later on in the build settings to force the build to use the Kubernetes cluster.
Figure 5.7. Kubernetes pod template
Click on Save and you are all set. Now builds can use the slaves in the Kubernetes pod we just created.
Note
A note about firewalls: the Jenkins Master will need to be reachable by all the machines in your Kubernetes cluster, as the pod could land anywhere. You can find your port settings in Jenkins under Manage Jenkins and Configure Global Security.
Bonus fun
Fabric8 bills itself as an integration platform. It includes a variety of logging, monitoring, and continuous delivery tools. It also has a nice console, an API registry, and a 3D game that lets you shoot at your pods. It's a very cool project, and it actually runs on Kubernetes. Refer to http://fabric8.io/.
It's an easy single command to set up on your Kubernetes cluster, so refer to http://fabric8.io/guide/getStarted/gke.html.
Summary
We looked at two continuous integration tools that can be used with Kubernetes. We did a brief walk-through of deploying a Gulp.js task on our cluster. We also looked at a new plugin to integrate Jenkins build slaves into your Kubernetes cluster. You should now have a better sense of how Kubernetes can integrate with your own CI/CD pipeline.
Chapter 6. Monitoring and Logging
This chapter will cover the usage and customization of both built-in and third-party monitoring tools on our Kubernetes cluster. We will cover how to use the tools to monitor the health and performance of our cluster. In addition, we will look at built-in logging, the Google Cloud Logging service, and Sysdig.
This chapter will discuss the following topics:
How Kubernetes uses cAdvisor, Heapster, InfluxDB, and Grafana
How to customize the default Grafana dashboard
How FluentD and Grafana are used
How to install and use logging tools
How to work with popular third-party tools, such as StackDriver and Sysdig, to extend our monitoring capabilities
Monitoring operations
Real-world monitoring goes far beyond checking whether a system is up and running. Although health checks, like those you learned in Chapter 2, Kubernetes – Core Concepts and Constructs, under the Health checks section, can help us isolate problem applications, operations teams can best serve the business when they can anticipate issues and mitigate them before a system goes offline.
Best practices in monitoring are to measure the performance and usage of core resources and watch for trends that stray from the normal baseline. Containers are no different here, and a key component of managing our Kubernetes cluster is having a clear view into the performance and availability of the OS, network, system (CPU and memory), and storage resources across all nodes.
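The baseline idea above can be sketched in a few lines. This is purely an illustrative example (not part of Kubernetes or any tool covered in this chapter): it flags any sample that strays more than k standard deviations from the mean of the preceding window, using made-up CPU numbers.

```python
from statistics import mean, stdev

def anomalies(samples, window=10, k=3.0):
    """Flag indices whose value strays more than k standard
    deviations from the mean of the preceding window."""
    flagged = []
    for i in range(window, len(samples)):
        base = samples[i - window:i]
        m, s = mean(base), stdev(base)
        if s and abs(samples[i] - m) > k * s:
            flagged.append(i)
    return flagged

# Invented CPU usage percentages: steady around 20, then a spike.
cpu = [19, 21, 20, 22, 18, 20, 21, 19, 20, 21, 20, 95, 21]
print(anomalies(cpu))  # the spike at index 11 is flagged: [11]
```

Real monitoring systems use far more robust baselines (seasonality, percentiles), but the principle of alerting on deviation rather than on a fixed number is the same.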
In this chapter, we will examine several options to monitor and measure the performance and availability of all our cluster resources. In addition, we will look at a few options for alerting and notifications when irregular trends start to emerge.
Built-in monitoring
If you recall from Chapter 1, Kubernetes and Container Operations, we noted that our nodes were already running a number of monitoring services. We can see these once again by running the get pods command with the kube-system namespace specified as follows:
$ kubectl get pods --namespace=kube-system
The following screenshot is the result of the preceding command:
Figure 6.1. System pod listing
Again, we see a variety of services, but how does this all fit together? If you recall the Node (formerly minions) section from Chapter 2, Kubernetes – Core Concepts and Constructs, each node is running a kubelet. The kubelet is the main interface for nodes to interact with and update the API server. One such update is the metrics of the node resources. The actual reporting of the resource usage is performed by a program named cAdvisor.
cAdvisor is another open source project from Google, which provides various metrics on container resource use. Metrics include CPU, memory, and network statistics. There is no need to tell cAdvisor about individual containers; it collects the metrics for all containers on a node and reports this back to the kubelet, which in turn reports to Heapster.
NoteGoogle’sopensourceprojects
GooglehasavarietyofopensourceprojectsrelatedtoKubernetes.Checkthemout,usethem,andevencontributeyourowncode!
cAdvisorandHeapsterarementionedinthefollowingsection:
cAdvisor:https://github.com/google/cadvisorHeapster:https://github.com/kubernetes/heapster
Contribisacatch-allforavarietyofcomponentsthatarenotpartofcoreKubernetes.Itisfoundathttps://github.com/kubernetes/contrib.
LevelDBisakeystorelibrarythatwasusedinthecreationofInfluxDB.Itisfoundathttps://github.com/google/leveldb.
Heapster is yet another open source project from Google; you may start to see a theme emerging here (see the preceding information box). Heapster runs in a container on one of the minion nodes and aggregates the data from the kubelets. A simple REST interface is provided to query the data.
When using the GCE setup, a few additional packages are set up for us, which saves us time and gives us a complete package to monitor our container workloads. As we can see from Figure 6.1, there is another pod with influx-grafana in the title.
InfluxDBisdescribedatit’sofficialwebsiteasfollows1:
Anopen-sourcedistributedtimeseriesdatabasewithnoexternaldependencies.
Itisbasedonakeystorepackage(seethepreviousGoogle’sopensourceprojectsinformationbox)andisperfecttostoreandqueryeventortime-basedstatisticssuchasthoseprovidedbyHeapster.
Finally,wehaveGrafana,whichprovidesadashboardandgraphinginterfaceforthedatastoredinInfluxDB.UsingGrafana,userscancreateacustommonitoringdashboardandgetimmediatevisibilityintothehealthoftheirKubernetesclusterandthereforetheirentirecontainerinfrastructure.
ExploringHeapsterLet’squicklylookattheRESTinterfacebySSH’ingtothenodewiththeHeapsterpod.First,wecanlistthepodstofindtheonerunningHeapsterasfollows:
$kubectlgetpods--namespace=kube-system
Thenameofthepodshouldstartwithmonitoring-heapster.Runadescribecommandtoseewhichnodeitisrunningonasfollows:
$kubectldescribepods/<HeapstermonitoringPod>--namespace=kube-system
Fromtheoutputinthefollowingfigure(Figure6.2),wecanseethatthepodisrunninginkubernetes-minion-merd.AlsonotetheIPforthepod,afewlinesdown,aswewillneedthatinamoment.
Figure6.2.Heapsterpoddetails
Next, we can SSH to this box with the familiar gcloud ssh command as follows:
$ gcloud compute --project "<Your project ID>" ssh --zone "<your gce zone>" "<kubernetes minion from describe>"
From here, we can access the Heapster REST API directly using the pod's IP address. Remember that pod IPs are routable not only in the containers but also on the nodes themselves. The Heapster API is listening on port 8082, and we can get a full list of metrics at /api/v1/metric-export-schema/.
Let's see the list now by issuing a curl command to the pod IP address we saved from the describe command as follows:
$ curl -G <Heapster IP from describe>:8082/api/v1/metric-export-schema/
We will see a listing that is quite long. The first section shows all the metrics available. The last two sections list fields by which we can filter and group. For your convenience, I've added the following tables that are a little bit easier to read:
Metric | Description | Unit | Type
uptime | The number of milliseconds since the container was started | ms | cumulative
cpu/usage | Cumulative CPU usage on all cores | ns | cumulative
cpu/limit | CPU limit in millicores | - | gauge
memory/usage | Total memory usage | bytes | gauge
memory/working_set | Total working set usage. Working set is the memory being used and not easily dropped by the kernel | bytes | gauge
memory/limit | Memory limit | bytes | gauge
memory/page_faults | The number of page faults | - | cumulative
memory/major_page_faults | The number of major page faults | - | cumulative
network/rx | Cumulative number of bytes received over the network | bytes | cumulative
network/rx_errors | Cumulative number of errors while receiving over the network | - | cumulative
network/tx | Cumulative number of bytes sent over the network | bytes | cumulative
network/tx_errors | Cumulative number of errors while sending over the network | - | cumulative
filesystem/usage | Total number of bytes consumed on a filesystem | bytes | gauge
filesystem/limit | The total size of filesystem in bytes | bytes | gauge
Table 6.1. Available Heapster metrics
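Note the distinction between the gauge and cumulative types in Table 6.1: gauges are point-in-time values, while cumulative counters only grow, so to turn one into something usable (for example, bytes per second over the network) you difference successive samples. A small sketch with invented sample data:

```python
def to_rates(samples):
    """Convert (timestamp_seconds, cumulative_value) samples into
    per-second rates between consecutive samples."""
    rates = []
    for (t0, v0), (t1, v1) in zip(samples, samples[1:]):
        rates.append((v1 - v0) / (t1 - t0))
    return rates

# network/rx-style samples: cumulative bytes received, 30s apart.
rx = [(0, 0), (30, 1_500_000), (60, 4_500_000), (90, 4_800_000)]
print(to_rates(rx))  # [50000.0, 100000.0, 10000.0] bytes/sec
```

This is exactly the kind of transformation Grafana performs for you when you chart a cumulative series as a rate.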
Field | Description | Label type
hostname | The hostname where the container ran | Common
host_id | An identifier specific to a host, which is set by the cloud provider or user | Common
container_name | The user-provided name of the container or the full container name for system containers | Common
pod_name | The name of the pod | Pod
pod_id | The unique ID of the pod | Pod
pod_namespace | The namespace of the pod | Pod
namespace_id | The unique ID of the namespace of the pod | Pod
labels | A comma-separated list of user-provided labels | Pod
Table 6.2. Available Heapster fields
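The fields in Table 6.2 are what let us slice metrics at the pod level rather than staring at raw per-container numbers. As a toy illustration (the records here are fabricated, not real Heapster output), rolling per-container memory values up to their pods looks like this:

```python
from collections import defaultdict

# Fabricated records shaped like Heapster field/value pairs.
records = [
    {"pod_name": "web-1", "container_name": "nginx",   "value": 120},
    {"pod_name": "web-1", "container_name": "sidecar", "value": 30},
    {"pod_name": "db-1",  "container_name": "mysql",   "value": 512},
]

def sum_by(records, field):
    """Group records on a label field and sum their values."""
    totals = defaultdict(int)
    for r in records:
        totals[r[field]] += r["value"]
    return dict(totals)

print(sum_by(records, "pod_name"))  # {'web-1': 150, 'db-1': 512}
```

Grouping on pod_namespace or labels works the same way, which is what the Grafana group-by options are doing under the hood.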
Customizing our dashboards
Now that we have the fields, we can have some fun. Recall the Grafana page we looked at in Chapter 1, Kubernetes and Container Operations. Let's pull that up again by going to our cluster's monitoring URL. Note that you may need to log in with your cluster credentials. Refer to the following format of the link you need to use: https://<your master IP>/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana
We'll see the default Kubernetes dashboard, and now we can add our own statistics to the board. Scroll all the way to the bottom and click on Add a Row. This should create a space for a new row and present a green tab on the left-hand side of the screen.
Let's start by adding a view into the filesystem usage for each node (minion). Click on the green tab to expand, then choose Add Panel and then graph. An empty graph should appear on the screen. If we click on the graph where it says no title (click here), a context menu will appear. We can then click on Edit, and we'll be able to set up the query for our custom dashboard panel.
The series box allows us to use any of the Heapster metrics we saw in the previous tables. In the series box, enter filesystem/usage_bytes_gauge and set select to max(value). Then, enter 5s for group by time and hostname in the box marked column next to the plus sign, as shown in the following screenshot:
Figure 6.3. Grafana panel query settings
Next,let’sclickontheAxes&Gridtab,sothatwecansettheunitsandlegend.UnderLeftYAxis,setFormattobytesandLabeltoDiskSpaceUsed.UnderRightYAxis,setFormattonone.Next,underLegendstyles,makesuretocheckShowvalues,andtable.ALegendValuessectionshouldappear,andwecanchecktheboxforMaxhere.
Now,let’squicklygototheGeneraltabandchooseatitle.Inmycase,InamedmineFilesystemDiskUsagebyNode(max).
Wedon’twanttolosethisnicenewgraphwe’vecreated,solet’sclickonthesaveiconinthetoprightcorner.Itlookslikeafloppydisk(youcandoaGoogleimagesearchifyoudon’tknowwhatthoseare).
Afterweclickonthesaveicon,adropdownwillappearwithseveraloptions.Thefirstitemshouldhavethedefaultdashboardtitle,whichisKubernetesCluster!atthetimeofthiswriting.Also,clickonthesaveiconontheright-handside.
Itshouldtakeusbacktothemaindashboardwherewewillseeournewgraphatthebottom.Let’saddanotherpaneltothatrow.AgainusethegreentabandthenselectAddPanelandsinglestat.Onceagain,anemptypanelwillappear,andwecanclickitwhereitsaysnotitle(clickhere)forthecontextmenuandthenclickonEdit.
Let’ssay,wewanttowatchaparticularnodeandmonitormemoryusage.Wecaneasilydothisbysettingthewhereclauseinourquery.First,choosenetwork/rx_bytes_cumulativeforseriesandmean(value)forselect.Then,wecanspecifythehostnameinthewhereclausewithhostname=kubernetes-minion-35aoandgroupbytimeto5s.(Useoneofyourownhostnamesifyouarefollowingalong).
Figure 6.4. Single stat options
Under the Options tab, make sure that Unit format is set to bytes and check the Sparkline box under Sparklines. The sparkline gives us a quick history view of the recent variation in the value. We can use the Background mode to take up the entire background; by default, it uses the area below the value.
Tip
Under Coloring, we can optionally check the Value box. A Thresholds and Colors section will appear. This will allow us to choose different colors for the value based on the threshold tier we specify. Note that an unformatted version of the number must be used for threshold values.
Now, let's go back to the General tab and choose a title as Network bytes received (Node 35ao). Once again, let's save our work and return to the dashboard. We should now have a row that looks like the following figure (Figure 6.5):
Figure 6.5. Custom dashboard panels
A third type of panel we didn't cover is text. It's pretty straightforward and allows us to place a block of text on the dashboard using HTML, markdown, or just plain text.
As we can see, it is pretty easy to build a custom dashboard and monitor the health of our cluster at a glance.
FluentD and Google Cloud Logging
Looking back at Figure 6.1, you may have noted a number of pods starting with the words fluentd-cloud-logging-kubernetes. These pods appear when using the GCE provider for your K8s cluster. A pod like this exists on every node in our cluster, and its sole purpose is to handle the processing of Kubernetes logs.
If we log in to our Google Cloud Platform account, we can see some of the logs processed there. Simply navigate to our project page, and on the left, under Monitoring, click on Logs. (If you are using the beta console, it will be under Operations and then Logging.) This will take us to a log listing page with a number of drop-down menus on the top. If this is your first time visiting the page, you should see a log selection dropdown with the value All Logs.
In this dropdown, we'll see a number of Kubernetes-related entries, including kubelet and some entries with kubernetes at the beginning of the label. We can also filter by date and use the play button to watch events stream in live.
Figure 6.6. The Google Cloud Logging filter
FluentD
Now we know that the fluentd-cloud-logging-kubernetes pods are sending the data to the Google Cloud, but why do we need FluentD? Simply put, FluentD is a collector. It can be configured to have multiple sources to collect and tag logs, which are then sent to various output points for analysis, alerting, or archiving. We can even transform data using plugins before it is passed on to its destination.
Not all provider setups have FluentD installed by default, but it is one of the recommended approaches to give us greater flexibility for future monitoring operations. The AWS Kubernetes setup also uses FluentD, but instead forwards events to Elasticsearch.
Note
Exploring FluentD
If you are curious about the inner workings of the FluentD setup or just want to customize the log collection, we can explore quite easily using the kubectl exec command.
First, let's see if we can find the FluentD config file:
$ kubectl exec fluentd-cloud-logging-kubernetes-minion-35ao --namespace=kube-system -- ls /etc
Usually, we would look in the etc folder for a td-agent or fluent subfolder. However, if we run an ls command, we'll see that there is no td-agent or fluent subfolder, but there is a google-fluentd subfolder:
$ kubectl exec fluentd-cloud-logging-kubernetes-minion-35ao --namespace=kube-system -- ls /etc/google-fluentd/
While searching in this directory, we should see a google-fluentd.conf file. We can view that file with a simple cat command as follows:
$ kubectl exec fluentd-cloud-logging-kubernetes-minion-35ao --namespace=kube-system -- cat /etc/google-fluentd/google-fluentd.conf
We should see a number of sources including the kubelet, containers, etcd, and various other Kubernetes components.
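For reference, a FluentD source/match pair generally looks like the following. This is an illustrative fragment only; the exact paths, tags, and plugin names in the google-fluentd.conf on your nodes will differ:

```
<source>
  type tail                          # follow a log file as it grows
  path /var/log/containers/*.log     # where container logs land on the node
  pos_file /var/log/containers.log.pos
  tag kubernetes.*
  format json
</source>

<match kubernetes.**>
  type google_cloud                  # ship matched events to Cloud Logging
</match>
```

Each source collects and tags events, and each match routes tagged events to an output plugin, which is the collector model described above.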
Note that while we can make changes here, remember that this is a running container and our changes won't be saved if the pod dies or is restarted. If we really want to customize, it's best to use this container as a base and build a new container that we can push to a repository for later use.
Maturing our monitoring operations
While Grafana gives us a great start to monitor our container operations, it is still a work in progress. In the real world of operations, having a complete dashboard view is great once we know there is a problem. However, in everyday scenarios, we'd prefer to be proactive and actually receive notifications when issues arise. This kind of alerting capability is a must to keep the operations team ahead of the curve and out of reactive mode.
There are many solutions available in this space, and we will take a look at two in particular: GCE monitoring (StackDriver) and Sysdig.
GCE (StackDriver)
StackDriver is a great place to start for infrastructure in the public cloud. It is actually owned by Google, so it's integrated as the Google Cloud Platform monitoring service. Before your lock-in alarm bells start ringing, StackDriver also has solid integration with AWS. In addition, StackDriver has alerting capability with support for notification to a variety of platforms and webhooks for anything else.
Sign-up for GCE monitoring
In the GCE console, under the Monitoring section, there is a Dashboard & alerts link (or just the Monitoring link under Operations in the beta console). This will open a new window where we can enable the monitoring functionality (still in beta at the time of this writing). Once enabled, we'll be taken to a screen that has install instructions for each operating system (this will be under Set up and monitor an endpoint in the beta console). It will also show your API key, which is necessary for the installation.
Tip
If you want to do something similar in AWS, you can simply sign up for an account at StackDriver's main website: https://www.stackdriver.com/
Installation instructions for the more common installs can be found at http://support.stackdriver.com/customer/en/portal/articles/1491726-what-is-the-stackdriver-agent.
We can find our API key under Account Settings and API Keys.
Click on Go to Monitoring to proceed. We'll be taken to the main dashboard page where we will see some basic statistics on our nodes in the cluster. If we go to Infrastructure and then Instances, we'll be taken to a page with all our nodes listed. By clicking on an individual node, we can again see some basic information even without an agent installed.
Configure detailed monitoring
As we have seen, simply enabling monitoring will give us basic stats for all our machines in GCE, but if we want to get detailed results, we'll need the agent on each node. Let's walk through an install.
As before, we'll want to use the gcloud compute ssh command to get a shell on one of our minion nodes. Then, we can download and install the agent. If you need your API key, this can be found by clicking your user icon in the top-right corner and going to Account Settings and then, on the next page, clicking on API Keys in the menu on the left:
$ curl -O https://repo.stackdriver.com/stack-install.sh
$ sudo bash stack-install.sh --api-key=<API-KEY>
If everything goes well, we should have an agent installed and ready. We can check this by running the info command as follows:
$ /opt/stackdriver/stack-config info
We should see a lot of information in the form of JSON on the screen. After you finish, give the agent a few minutes before going back to Infrastructure and Instances.
On the summary instance page, we'll note that all our GCE instances are showing CPU usage. However, only the instance with the agent installed will show the Memory usage statistic.
Click on the node with the agent installed, so we can inspect it a bit further. If we click on each one and look at the details page, we should note that the instance with the agent installed has a lot more information. Although all instances report CPU usage, Disk I/O, and network traffic, the instance with the agent has much more.
Figure 6.7. Google Cloud Monitoring with agent installed
In Figure 6.7, we can see a variety of additional charts including Open TCP connections and processes as well as CPU steal (not pictured). We also have better visibility into the machine details such as network interfaces, filesystems, and operating system information.
Now that we see how much information is available, we can install the agent on the remaining instances. You may also wish to install an agent on the master as it is a critical piece of your Kubernetes infrastructure.
Alerts
Next, we can look at the alerting policies available as part of the monitoring service. From the instance details page, click on the Create Alerting Policy button in the Incidents section at the top of the page.
We'll name the policy Excessive CPU Load and set a metric threshold. In the Metric Threshold area, click on Next and then, in the TARGET section, set Resource Type to Instances. Then, set Applies To to Group and kubernetes. Leave Condition Triggers If set to Any Member Violates.
Click on Next and leave IF METRIC as CPU (agent) and CONDITION as above. Now set THRESHOLD (PERCENT) to 80 and leave the time under FOR at 5 minutes. Click on Save Condition.
Figure 6.8. Google Cloud Monitoring alert policy
Finally, we will add a notification. Under that section, leave Method as Email and click on Add Notification. Enter your e-mail address and then click on Save Policy.
Now whenever the CPU from one of our instances goes above 80 percent, we will receive an e-mail notification. If we ever need to review our policies, we can find them under the Alerting dropdown and Policies Overview in the menu at the top of the screen.
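The "above 80 percent for 5 minutes" condition is worth pausing on: the alert fires only when the metric violates the threshold for the whole duration, which filters out momentary spikes. A minimal sketch of that semantics (not StackDriver's actual implementation, and it assumes one sample per minute):

```python
def should_alert(samples, threshold=80, duration=5):
    """Fire only if the metric exceeded the threshold for
    `duration` consecutive samples (one sample per minute here)."""
    streak = 0
    for value in samples:
        streak = streak + 1 if value > threshold else 0
        if streak >= duration:
            return True
    return False

print(should_alert([85, 90, 40, 85, 88, 91]))  # brief spikes: False
print(should_alert([82, 85, 90, 88, 84, 86]))  # sustained load: True
```

Choosing the duration is the usual trade-off: too short and you page on noise, too long and you learn about a real problem late.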
Beyond system monitoring with Sysdig
Monitoring our cloud systems is a great start, but what about visibility into the containers themselves? Although there are a variety of cloud monitoring and visibility tools, Sysdig stands out for its ability to dive deep not only into system operations but specifically containers.
Sysdig is open source and is billed as a universal system visibility tool with native support for containers2. It is a command-line tool, which provides insight into the areas we've looked at earlier such as storage, network, and system processes. What sets it apart is the level of detail and visibility it offers for these process and system activities. Furthermore, it has native support for containers, which gives us a full picture of our container operations. This is a highly recommended tool for your container operations arsenal. Their main website is http://www.sysdig.org/.
Sysdig Cloud
We will take a look at the Sysdig tool and some of the useful command-line-based UIs in a moment. However, the team at Sysdig has also built a commercial product, named Sysdig Cloud, which provides the advanced dashboard, alerting, and notification services we discussed earlier in the chapter. Also, the differentiator here is high visibility into containers, including some nice visualizations of our application topology.
Note
If you'd rather skip the Sysdig Cloud section and just try out the command-line tool, simply skip to The Sysdig command line section later in this chapter.
If you have not done so already, sign up for Sysdig Cloud at http://www.sysdigcloud.com.
After activating and logging in for the first time, we'll be taken to a welcome page. Clicking on Next, we are shown a page with various options to install the sysdig agents. For our example environment, we will use a Linux agent. The Next button will be disabled until we install at least one agent. The page should show the following command with our access key filled in.
curl -s https://s3.amazonaws.com/download.draios.com/stable/install-agent | sudo bash -s -- --access_key <Your Access Key>
We'll need to SSH into our master and each node to run the installer. It will take a few minutes to install several packages and then set up the connection to the Sysdig Cloud.
After our first install completes, the page should update with the text You have one agent connected! and the Next button will become active. Go ahead and install the rest of the agents and then come back to this page and click on Next.
We can skip the AWS setup for now and then click on Let's Get Started on the final screen.
We’llbetakentothemainsysdigclouddashboardscreen.kubernetes-masterandourvariousminionnodesshouldappearundertheExploretab.Weshouldseesomething
similartoFigure6.9withourclustermasterandallfourminionnodes(orthenodeswehavealreadyinstalledagentson).
Figure6.9.SysdigCloudExplorepage
ThispageshowsusatableviewandthelinksontheleftletusexploresomekeymetricsforCPU,memory,networking,andsoon.Althoughthisisagreatstart,thedetailedviewswillgiveusamuchdeeperlookateachnode.
Detailed views
Let's take a look at these views. Select kubernetes-master and then scroll down to the detail section that appears below. By default, we should see the System: Overview by Process view (if it's not selected, just click on it in the list on the left). If the chart is hard to read, simply use the maximize icon in the top-left corner of each graph for a larger view.
There are a variety of interesting views to explore. Just to call out a few others, Application: HTTP and System: Overview by container give us some great charts for inspection. In the latter view, we can see stats for CPU, memory, network, and file usage by container.
Topology views
In addition, there are three topology views at the bottom. These views are perfect for helping us understand how our application is communicating. Click on Topology: Network Traffic and wait a few seconds for the view to fully populate. It should look similar to Figure 6.10:
Figure 6.10. Sysdig Cloud network topology view
We note the view maps out the flow of communication between the minion nodes and the master in the cluster. On the right-hand side, there may be connections to servers with a 1e100.net name and also 169.254.169.254, which are both part of Google infrastructure.
You may also note a + symbol in the top corner of the node boxes. Click on that in kubernetes-master and use the zoom tools at the top of the view area to zoom into the details, as you see in Figure 6.11:
Figure 6.11. The Sysdig Cloud network topology detailed view
Note that we can now see all the components of Kubernetes running inside the master. We can see how the various components work together. We will see kubectl and the kubelet process running, as well as a number of boxes with the Docker whale, which indicates that they are containers. If we zoom in and use the plus icon, we will see that these are the containers for the core Kubernetes processes, as we saw in the Services running on the master section in Chapter 1, Kubernetes and Container Operations.
Also, if we pan over to the minion, we can also see the kubelet, which initiates communication, and follow it all the way through to the kube-apiserver container in the master.
We can even see the instance probing for GCE metadata on 169.254.169.254. This view is great in order to get a mental picture of how our infrastructure and underlying containers are talking to one another.
Metrics
Next, let's switch over to the Metrics tab in the left-hand menu next to Views. Here, there are also a variety of helpful views.
Let's look at capacity.estimated.request.total.count (avg) under System. This view shows us an estimate of how many requests a node is capable of handling when fully loaded. This can be really useful for infrastructure planning.
Figure 6.12. Sysdig Cloud capacity estimate view
Alerting
Now that we have all this great information, let's create some notifications. Scroll back up to the top of the page and find the bell icon next to one of your minion entries. This will open a New Alert dialog. Here, we can set manual alerts similar to what we did earlier in the chapter. However, there is also the option to use Baselines and Host comparison.
Using the Baseline option is extremely helpful as Sysdig will watch the historical patterns of the node and alert us whenever one of the metrics strays outside the expected metric thresholds. No manual settings are required, so this can really save time for the notification setup and help our operations team to be proactive before issues arise. Refer to the following image:
Figure 6.13. Sysdig Cloud new alert
The Host Comparison option is also a great help as it allows us to compare metrics with other hosts and alert whenever one host has a metric that differs significantly from the group. A great use case for this is monitoring resource usage across minion nodes to ensure that our scheduling constraints are not creating a bottleneck somewhere in the cluster.
You can choose whichever option you like, give it a name and description, and choose a notification method. Sysdig supports e-mail, SNS (short for Simple Notification Service), and PagerDuty as notification methods. Once you have everything set, just click on Create and you will start to receive alerts as issues come up.
Kubernetes support
An exciting new feature that has been recently released is support for integrating directly with the Kubernetes API. The agents make calls to K8s so that they are aware of metadata and the various constructs, such as pods and RCs.
We can check this out easily on the main dashboard by clicking the gear icon next to the word Show on the top bar. We should see some filter options as in the following figure (Figure 6.14). Click on the Apply button next to Logical Apps Hierarchy - Kubernetes. This will set a number of filters that organize our list in order of namespace, RC, pods, and finally container ID.
Figure 6.14. Sysdig Cloud Kubernetes filters
We can then select a default namespace from the list and use the detail views later, as we did before. By selecting the Topology: Network Traffic view, we can drill into the namespace and get a visual for each RC and the pods running within (see Figure 6.15):
Figure 6.15. Sysdig Cloud Kubernetes-aware topology view
The Sysdig command line
Whether you only use the open source tool or you are trying out the full Sysdig Cloud package, the command-line utility is a great companion to have to track down issues or get a deeper understanding of your system.
In the core tool, there is the main sysdig utility and also a command-line-style UI named csysdig. Let's take a look at a few useful commands.
We'll need to SSH to the master or one of the minion nodes where we installed the Sysdig Cloud agents. It's a single command to install the CLI tools as follows:
$ curl -s https://s3.amazonaws.com/download.draios.com/stable/install-sysdig | sudo bash
Note
You can find instructions for other OSes at http://www.sysdig.org/install/.
First, we can see the processes with the most network activity by issuing the following command:
$ sudo sysdig -pc -c topprocs_net
The following screenshot is the result of the preceding command:
Figure 6.16. A Sysdig top process by network activity
This is an interactive view that will show us the top processes in terms of network activity. Also, there are a plethora of commands to use with sysdig. A few other useful commands to try out include the following:
$ sudo sysdig -pc -c topprocs_cpu
$ sudo sysdig -pc -c topprocs_file
$ sudo sysdig -pc -c topprocs_cpu container.name=<Container Name NOT ID>
Note
More examples can be found at http://www.sysdig.org/wiki/sysdig-examples/.
The csysdig command-line UI
Just because we are in a shell on one of our nodes doesn't mean we can't have a UI. Csysdig is a customizable UI to explore all the metrics and insight that Sysdig provides. Simply type csysdig at the prompt:
$ csysdig
After entering csysdig, we see a real-time listing of all processes on the machine. At the bottom of the screen, you'll note a menu with various options. Click on Views, or press F2 if you love to use your keyboard. In the left-hand menu, there are a variety of options, but we'll look at threads. Double-click to select Threads.
We can see all the threads currently running on the system and some information about their resource usage. By default, we see a big list that is updating often. If we click on Filter (F4 for the mouse-challenged), we can slim down the list.
Type kube-apiserver, if you are on the master, or kube-proxy, if you are on a (minion) node, in the filter box and press Enter. The view now filters for only the threads in that command.
Figure 6.17. Csysdig threads
If we want to inspect a little further, we can simply select one of the threads in the list and click on Dig or F6. Now we see a detailed listing of system calls from the command in real time. This can be a really useful tool to gain deep insight into the containers and processes running on our cluster.
Press Back or the backspace key to go back to the previous screen. Then, go to Views once more. This time, we will look at the Containers view. Once again, we can filter and also use the Dig view to get more in-depth visibility into what is happening at a system call level.
Another menu item you might note here is Actions, which is available in the newest release. These features allow us to go from process monitoring to action and response. It gives us the ability to perform a variety of actions from the various process views in csysdig. For example, the container view has actions to drop into a bash shell, kill containers, inspect logs, and more. It's worth getting to know the various actions and hotkeys, and even adding your own custom hotkeys for common operations.
Summary
We took a quick look at monitoring and logging with Kubernetes. You should now be familiar with how Kubernetes uses cAdvisor and Heapster to collect metrics on all the resources in a given cluster. Furthermore, we saw how Kubernetes saves us time by providing InfluxDB and Grafana set up and configured out of the box. Dashboards are easily customizable for our everyday operational needs.
In addition, we looked at the built-in logging capabilities with FluentD and the Google Cloud Logging service. Also, Kubernetes gives us great time savings by setting up the basics for us.
Finally, you learned about the various third-party options available to monitor our containers and clusters. Using these tools will allow us to gain even more insight into the health and status of our applications. All these tools combine to give us a solid toolset to manage day-to-day operations.
Footnotes
1 http://stackdriver.com/
2 http://www.sysdig.org/wiki/
Chapter 7. OCI, CNCF, CoreOS, and Tectonic
The first half of this chapter will cover how open standards encourage a diverse ecosystem of container implementations. We'll look at the Open Container Initiative and its mission to provide an open container specification as well. The second half of this chapter will cover CoreOS and its advantages as a host OS, including performance and support for various container implementations. Also, we'll take a brief look at the Tectonic enterprise offering from CoreOS.
This chapter will discuss the following topics:
Why standards matter
The Open Container Initiative and Cloud Native Computing Foundation
Container specifications versus implementations
CoreOS and its advantages
Tectonic
The importance of standards
Over the past two years, containerization technology has seen tremendous growth in popularity. While Docker has been at the center of this ecosystem, there is an increasing number of players in the container space. There are already a number of alternatives to Docker's containerization implementation itself (rkt, Garden, LXD, and so on). In addition, there is a rich ecosystem of third-party tools that enhance and complement your container infrastructure. Kubernetes lands squarely on the orchestration side of this ecosystem, but the bottom line is that all these tools form the basis to build cloud native applications.
As we mentioned in the very beginning of the book, one of the most attractive things about containers is their ability to package our applications for deployment across various environments (that is, development, testing, and production) and various infrastructure providers (GCP, AWS, on-premise, and so on).
To truly support this type of deployment agility, we need not only the containers themselves to have a common platform, but also the underlying specifications to follow a common set of ground rules. This will allow for implementations that are both flexible and highly specialized. For example, some workloads may need to be run on a highly secure implementation. To provide this, the implementation will have to make more intentional decisions about some aspects of implementation. In either case, we will have more agility and freedom if our containers are built on some common structures that all implementations agree on and support.
Open Container Initiative

One of the first initiatives to gain widespread industry engagement is the Open Container Initiative (OCI). Among the industry collaborators are Docker, Red Hat, VMware, IBM, Google, AWS, and many more listed on the OCI website at https://www.opencontainers.org/.

The purpose of the OCI is to split implementations, such as Docker and Rocket, from a standard specification for the format and runtime of containerized workloads. By their own terms, the goal of the OCI specification has three tenets1:

- Creating a formal specification for container image formats and runtime, which will allow a compliant container to be portable across all major, compliant operating systems and platforms without artificial technical barriers.
- Accepting, maintaining, and advancing the projects associated with these standards (the "Projects"). It will look to agree on a standard set of container actions (start, exec, pause, and so on) as well as the runtime environment associated with the container runtime.
- Harmonizing the above-referenced standard with other proposed standards, including the appc specification.
Cloud Native Computing Foundation

A second initiative that also has widespread industry acceptance is the Cloud Native Computing Foundation (CNCF). While still focused on containerized workloads, the CNCF operates a bit higher up the stack, at an application design level. The purpose is to provide a standard set of tools and technologies to build, operate, and orchestrate cloud native application stacks. Cloud has given us access to a variety of new technologies and practices that can improve and evolve our classic software designs. This is also particularly focused on the new paradigm of microservice-oriented development.

As a founding participant in the CNCF, Google has donated the Kubernetes open source project as the first step. The goal will be to increase interoperability in the ecosystem and support better integration with projects, starting off with Mesos.

Note: For more information on the CNCF, refer to https://cncf.io/.
Standard container specification

A core result of the OCI effort is the creation and development of the overarching container specification. The specification has five core principles for all containers to follow, which I will briefly paraphrase2:

- It must have standard operations to create, start, and stop containers across all implementations.
- It must be content-agnostic, which means that the type of application inside the container does not alter the standard operations or publishing of the container itself.
- The container must be infrastructure-agnostic as well. Portability is paramount; therefore, the containers must be able to operate just as easily in GCE as in your company datacenter or on a developer's laptop.
- A container must also be designed for automation, which allows us to automate across the build, updating, and deployment pipelines. While this rule is a bit vague, the container implementation should not require onerous manual steps for creation and release.
- Finally, the implementation must support industrial-grade delivery. Once again, this speaks to the build and deployment pipelines and requires a streamlined efficiency in the portability and transit of the containers between infrastructure and deployment tiers.

The specification also defines core principles for container formats and runtimes. You can read more about the specifications on the GitHub project at https://github.com/opencontainers/specs.

While the core specification can be a bit abstract, the runC implementation is a concrete example of the OCI specs in the form of a container runtime and image format. You can read more of the technical details on GitHub at https://github.com/opencontainers/runc.
runC is the backing format and runtime for a variety of popular container tools. It was donated to the OCI by Docker and was created from the same plumbing work used in the Docker platform. Since its release, it has had a welcome uptake by numerous projects.

Even the popular open source PaaS, Cloud Foundry, announced that it will use runC in Garden. Garden provides the containerization plumbing for Diego, which acts as an orchestration layer similar to Kubernetes.

rkt was originally based on the appc specification. appc was actually an earlier attempt by the folks at CoreOS to form a common specification around containerization. Now that CoreOS is participating in the OCI, they are working to help merge the appc specification into the OCI; this should result in a higher level of compatibility across the container ecosystem.
CoreOS

While the specifications provide us a common ground, there are also some trends evolving around the choice of OS for our containers. Several tailor-fit OSes are being developed specifically to run container workloads. Although implementations vary, they all share similar characteristics: a focus on a slim installation base, atomic OS updates, and signed applications for efficient and secure operations.

One OS that is gaining popularity is CoreOS. CoreOS offers major benefits for both security and resource utilization. It provides the latter by removing package dependencies from the picture completely. Instead, CoreOS runs all applications and services in containers. By providing only a small set of services required to support running containers and bypassing the need for hypervisor usage, CoreOS lets us use a larger portion of the resource pool to run our containerized applications. This allows users to gain higher performance from their infrastructure and better container-to-node (server) usage ratios.
Note: More container OSes

Several other container-optimized OSes have emerged recently.

Red Hat Enterprise Linux Atomic Host focuses on security, with SELinux enabled by default and "atomic" updates to the OS similar to what we saw with CoreOS. Refer to the following link:

https://access.redhat.com/articles/rhel-atomic-getting-started

Ubuntu Snappy also capitalizes on the efficiency and security gains of separating the OS components from the frameworks and applications. Using application images and verification signatures, we get an efficient Ubuntu-based OS for our container workloads:

http://www.ubuntu.com/cloud/tools/snappy

VMware Photon is another lightweight container OS that is optimized specifically for vSphere and the VMware platform. It runs Docker, rkt, and Garden, and also has some experimental versions you can run on the popular public cloud offerings. Refer to the following link:

https://vmware.github.io/photon/
Using the isolated nature of containers, we increase reliability and decrease the complexity of updates for each application. Now applications can be updated along with supporting libraries whenever a new container release is ready.

Figure 7.1. CoreOS updates

Finally, CoreOS has some added advantages in the realm of security. For starters, the OS can be updated as one whole unit instead of by individual packages (refer to Figure 7.1). This avoids many issues that arise from partial updates. To achieve this, CoreOS uses two partitions: one as the active OS partition and a secondary one to receive a full update. Once updates are completed successfully, a reboot promotes the secondary partition. If anything goes wrong, the original partition is available for failback.

System owners can also control when those updates are applied. This gives us the flexibility to prioritize critical updates while working with real-world scheduling for the more common updates. In addition, the entire update is signed and transmitted via SSL for added security across the entire process.
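The A/B partition update flow just described can be sketched as a tiny state machine. This is purely a toy model for illustration (the class and method names are made up, not CoreOS code), but it captures the key property: the running partition is never modified in place, and a failed update simply keeps booting the old, known-good partition.

```python
class ABUpdater:
    """Toy model of a CoreOS-style A/B partition update (illustration only)."""

    def __init__(self):
        self.partitions = {"A": "v1.0", "B": None}  # installed OS image per partition
        self.active = "A"                           # the partition we booted from

    def _passive(self):
        return "B" if self.active == "A" else "A"

    def stage_update(self, version):
        # The full new image is written to the passive partition;
        # the active (running) system is left untouched.
        self.partitions[self._passive()] = version

    def reboot(self, update_ok=True):
        passive = self._passive()
        if update_ok and self.partitions[passive]:
            # Success: promote the freshly updated partition.
            self.active = passive
        # Failure: do nothing -- we fail back to the old partition.

updater = ABUpdater()
updater.stage_update("v1.1")
updater.reboot(update_ok=True)
print(updater.active, updater.partitions[updater.active])   # B v1.1
```

Real CoreOS adds signing, checksumming, and scheduled reboots on top of this basic promote-or-fail-back mechanism.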
rkt

A central piece of the CoreOS ecosystem is its own container runtime, named rkt. As we mentioned earlier, rkt is another implementation with a specific focus on security. rkt's main advantage is that it runs the engine without a daemon running as root, the way Docker does today. Initially, rkt also had an advantage in establishing trust for container images. However, recent updates to Docker have made great strides with the new Content Trust feature.

The bottom line is that rkt is still an implementation focused on security, for running containers in production. rkt uses an image format named ACI, but it also supports running Docker-based images. At the time of writing this book, it is only at version 0.11.0, but it's already gaining momentum as a way to run Docker images securely in production.

In addition, CoreOS recently announced integration with Intel® Virtualization Technology, which allows containers to run with higher levels of isolation. This hardware-enhanced security allows containers to be run inside a Kernel-based Virtual Machine (KVM) process, providing isolation from the kernel similar to what we see with hypervisors today.
etcd

Another central piece of the CoreOS ecosystem worth mentioning is their open source etcd project. etcd is a distributed and consistent key-value store. A RESTful API is used to interface with etcd, so it's easy to integrate with your project.

If it sounds familiar, it's because we saw this process running in Chapter 1, Kubernetes and Container Operations, under the Services running on the master section. Kubernetes actually utilizes etcd to keep track of cluster configuration and current state. K8s uses it for its service discovery capabilities as well.
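To see why a consistent key-value store is so useful for cluster coordination, consider the compare-and-swap primitive etcd exposes. The following is a toy, in-process stand-in (not the real etcd client or REST API — in practice you would talk to etcd over HTTP) that shows how two components can race for a key and only one can win, because the check-and-update happens atomically:

```python
class ToyStore:
    """In-process stand-in for etcd's consistent key-value semantics (toy only)."""

    def __init__(self):
        self._data = {}

    def get(self, key):
        return self._data.get(key)

    def set(self, key, value):
        self._data[key] = value

    def compare_and_swap(self, key, expected, new):
        # etcd performs this check-and-update atomically across the whole
        # cluster, which is what makes it safe for coordination tasks
        # such as leader election or claiming a piece of configuration.
        if self._data.get(key) != expected:
            return False
        self._data[key] = new
        return True

store = ToyStore()
store.set("leader", None)
# Two hypothetical schedulers race to become leader; only one CAS can win.
first = store.compare_and_swap("leader", None, "scheduler-1")
second = store.compare_and_swap("leader", None, "scheduler-2")
print(first, second, store.get("leader"))   # True False scheduler-1
```

Kubernetes relies on exactly this kind of consistent, watchable state to keep the master's view of the cluster authoritative.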
Kubernetes with CoreOS

Now that we understand the benefits, let's take a look at a Kubernetes cluster using CoreOS. The documentation supports a number of platforms, but one of the easiest to spin up is AWS with the CoreOS CloudFormation and CLI scripts.

Tip: If you are interested in running Kubernetes with CoreOS on other platforms, you can find more details in the CoreOS documentation here:

https://coreos.com/kubernetes/docs/latest/

We can find the latest scripts for AWS here:

https://github.com/coreos/coreos-kubernetes/releases/latest

For this walk-through, we will use v0.1.0 (the latest at the time of writing) of the scripts. We'll need a Linux machine with the AWS CLI installed and configured. See the Working with other providers section of Chapter 1, Kubernetes and Container Operations, for details on installing and configuring the AWS CLI. I recommend that you use a box with the Kubernetes control scripts already installed to avoid having to download kubectl separately.
Let's first download and extract the tarball from GitHub as follows:

$ wget https://github.com/coreos/coreos-kubernetes/releases/download/v0.1.0/kube-aws-linux-amd64.tar.gz
$ tar xzvf kube-aws-linux-amd64.tar.gz

This will extract a single executable named kube-aws. This file will launch the AWS infrastructure in the same way that kube-up.sh did for us earlier.

Before we proceed, we need to create a key pair to use on AWS. For this example, I created one key pair named kube-aws-key. We can create a key in the console under the EC2 service: on the left-hand menu, select Key Pairs. Keys can also be created using the CLI.
Next, we will need to create a cluster definition file. In the same folder where we downloaded kube-aws, create a new file from Listing 7-1:

# Unique name of Kubernetes cluster. In order to deploy
# more than one cluster into the same AWS account, this
# name must not conflict with an existing cluster.
# clusterName: kubernetes

# Name of the SSH key pair already loaded into the AWS
# account being used to deploy this cluster.
keyName: kube-aws-key

# Region to provision Kubernetes cluster
region: us-east-1

# Availability Zone to provision Kubernetes cluster
# availabilityZone:

# DNS name routable to the Kubernetes controller nodes
# from worker nodes and external clients. The deployer
# is responsible for making this name routable
externalDNSName: kube-aws

# Number of worker nodes to create
# workerCount: 1

# Location of kube-aws artifacts used to deploy a new
# Kubernetes cluster. The necessary artifacts are already
# available in a public S3 bucket matching the version
# of the kube-aws tool. This parameter is typically
# overwritten only for development purposes.
# artifactURL: https://coreos-kubernetes.s3.amazonaws.com/<VERSION>

Listing 7-1: coreos-cluster.yaml
We have a few things to note. We have keyName set to the key we just created, kube-aws-key. The region is set to us-east-1 (Northern Virginia), so edit this if you prefer a different region. In addition, clusterName and workerCount are commented out, but their defaults are as listed: kubernetes and 1, respectively. workerCount defines the number of slaves, so you can increase this value if you need more nodes.

In addition, we have a placeholder DNS entry. The value for externalDNSName is set to kube-aws.

Note: For simplicity's sake, we can simply add an entry for kube-aws in the /etc/hosts file. For a production system, we would want a real entry that we could expose through Route 53, another DNS registrar, or a local DNS entry.
Now we can spin up the CoreOS cluster:

$ ./kube-aws up --config="coreos-cluster.yaml"

We should get the master IP in the console output under controllerIP. We will need to update the IP address for kube-aws in our /etc/hosts file or DNS provider. We can also get the master IP by checking our running instances in AWS; it should be labeled kube-aws-controller.

$ vi /etc/hosts

There you have it! We now have a cluster running CoreOS. The script creates all the necessary AWS resources, such as Virtual Private Clouds (VPCs), security groups, and an IAM role.

Tip: Note that if this is a fresh box, you will need to download kubectl separately, as it is not bundled with kube-aws:

$ wget https://storage.googleapis.com/kubernetes-release/release/v1.0.6/bin/linux/amd64/kubectl
We can now use kubectl to see our new cluster:

$ kubectl --kubeconfig=clusters/kubernetes/kubeconfig get nodes

We should see a single node listed with the EC2 internal DNS as the name. Note the --kubeconfig flag; this tells kubectl to use the configuration file for the cluster we just created instead of the previous GCE cluster we have been working with thus far. This is useful if we want to manage multiple clusters from the same machine.
Tectonic

Running Kubernetes on CoreOS is a great start, but you may find that you want a higher level of support. Enter Tectonic, the CoreOS enterprise offering for running Kubernetes with CoreOS. Tectonic uses many of the components we've already discussed. CoreOS is the OS, and both the Docker and rkt runtimes are supported. In addition, Kubernetes, etcd, and flannel are packaged together to give a full stack of cluster orchestration. We discussed flannel briefly in Chapter 3, Core Concepts – Networking, Storage, and Advanced Services. It is an overlay network that uses a model similar to the native Kubernetes model, and it uses etcd as a backend.

Offering a support package similar to Red Hat's, CoreOS is also providing 24x7 support for the open source software that Tectonic is built on. Tectonic also provides regular cluster updates and a nice dashboard with views for all the components of Kubernetes. CoreUpdate allows users to have more control of the automatic updates. In addition, it ships with Tectonic Identity for SSO across the cluster and Quay Enterprise, which provides a secure container registry behind your own firewall.
Dashboard highlights

Here are some highlights of the Tectonic dashboard:

Figure 7.2. The Tectonic main dashboard

Tectonic is now generally available, and the dashboard already has some nice features. As you can see in Figure 7.3, we can see a lot of detail about our replication controller and can even use the GUI to scale up and down with the click of a button:

Figure 7.3. Tectonic replication controller detail

Another nice feature is the Streaming events page. Here, we can watch the events live, pause, and filter based on event severity and resource type:

Figure 7.4. Events stream

A useful feature available anywhere in the dashboard is the namespace filtering option. Simply click on the gear in the top-right corner of the page, and we can filter our views by namespace. This can be helpful if we want to filter out the Kubernetes system pods or just look at a particular collection of resources:

Figure 7.5. Namespace filtering
Summary

In this chapter, we looked at the emerging standards bodies in the container community and how they are shaping the technology for the better with open specifications. We also took a closer look at CoreOS, a key player in both the container and Kubernetes communities. We explored the technology they are developing to enhance and complement container orchestration and saw first-hand how to use some of it with Kubernetes. Finally, we looked at the supported enterprise offering of Tectonic and some of the features that will be available soon.

Footnotes

1. https://www.opencontainers.org/faq/ (#11 on the page)
2. https://github.com/opencontainers/specs/blob/master/principles.md
Chapter 8. Towards Production-Ready

In this chapter, we'll look at considerations for moving to production. We will also show some helpful tools and third-party projects available in the Kubernetes community at large, and where you can go to get more help.

This chapter will discuss the following topics:

- Production characteristics
- The Kubernetes ecosystem
- Where to get help
Ready for production

We've walked through a number of typical operations using Kubernetes. As we saw, K8s offers a variety of features and abstractions that ease the burden of day-to-day management for container deployments.

There are many characteristics that define a production-ready system for containers. Figure 8.1 provides a high-level view of the major concerns for production-ready clusters. This is by no means an exhaustive list, but it's meant to provide some solid ground heading into production operations.

Figure 8.1. Production characteristics for container operations

We saw how the core concepts and abstractions of Kubernetes address a few of these concerns. The service abstraction has built-in service discovery and health checking at both the service and application level. We also get seamless application updates and scalability from the replication controller construct. All three core abstractions of services, replication controllers, and pods work with a core scheduling and affinity rule set and give us easy service and application composition.

There is built-in support for a variety of persistent storage options, and the networking model provides manageable network operations with options to work with other third-party providers. Also, we took a brief look at CI/CD integration with some of the popular tools in the marketplace.

Furthermore, we have built-in system events tracking and, with the major cloud providers, an out-of-the-box setup for monitoring and logging. We also saw how this can be extended with third-party providers such as StackDriver and Sysdig. These services also address overall node health and proactive trend deviation alerts.

The core constructs also help us address high availability in our application and service layers. The scheduler can be used with autoscaling mechanisms to provide this at a node level. There is also support for making the Kubernetes master itself highly available.

Finally, we explored a new breed of operating systems that give us a slim base to build on and secure update mechanisms for patching and updates. The slim base, together with scheduling, can help us with efficient resource utilization. In addition, there is functionality in the OS and Docker itself for trusted image verification.
Security

We have not explored many of the areas around security in depth. The subject itself could fill its own book. However, Kubernetes does provide one very important construct out of the box, named secrets.

Secrets give us a way to store sensitive information without including plaintext versions in our resource definition files. Secrets can be mounted into the pods that need them and then accessed within the pod as files, with the secret values as content.

Secrets are still in their early stages, but they are a vital component for production operations. There are several improvements planned here for future releases.

To learn more about secrets, and even get a walk-through, check out the Secrets section in the K8s user guide at http://kubernetes.io/v1.0/docs/user-guide/secrets.html.
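As a quick illustration of the pattern described above, a secret and a pod that mounts it might look like the following minimal sketch. The name db-password, the key password, and the mount path /etc/creds are made up for this example; the data value must be base64-encoded:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-password
data:
  # echo -n "s3cr3t" | base64
  password: czNjcjN0
---
apiVersion: v1
kind: Pod
metadata:
  name: secret-demo
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: creds
      mountPath: /etc/creds
      readOnly: true
  volumes:
  - name: creds
    secret:
      secretName: db-password
```

Inside the container, each key of the secret appears as a file under the mount path, so the application can read the value from /etc/creds/password instead of having it baked into the resource definition.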
Ready, set, go

While there are still some gaps, a variety of the remaining security and operations concerns are actively being addressed by third-party companies, as we will see in the following section. Going forward, the Kubernetes project will continue to evolve, and the community of projects and partners around K8s and Docker will also grow. The community is closing the remaining gaps at a phenomenal pace.
Third-party companies

Since the Kubernetes project's initial release, there has been a growing ecosystem of partners. We looked at CoreOS in the previous chapter, but there are many more projects and companies in this space. We will highlight a few that may be useful as you move towards production.

Private registries

In many situations, organizations will not want to place their applications and/or intellectual property in public repositories. For those cases, a private registry solution is helpful in securely integrating deployments end to end.

Google Cloud offers the Google Container Registry: https://cloud.google.com/container-registry/.

Docker has their own Trusted Registry offering: https://www.docker.com/docker-trusted-registry.

Quay.io also provides secure private registries and vulnerability scanning, and comes from the CoreOS team: https://quay.io/.
Google Container Engine

Google was the main author of the original Kubernetes project and is still a major contributor. Although this book has mostly focused on running Kubernetes on our own, Google also offers a fully managed container service through the Google Cloud Platform.

Note: Find more information on the Google Container Engine (GKE) website:

https://cloud.google.com/container-engine/

Kubernetes will be installed on GCE and managed by Google engineers. They also provide private registries and integration with your existing private networks.

Note: Create your first GKE cluster

From the GCP console, under Compute, click on Container Engine and then Container Clusters.

If this is your first time creating a cluster, you'll see an information box in the middle of the page. Click on the Create a container cluster button.

Choose a name for your cluster and the zone. You'll also be able to choose the machine type (instance size) for your nodes and how many nodes (cluster size) you want in your cluster. The master is managed and updated by the Google team themselves. Leave Cloud Logging checked. Click on Create, and in a few minutes, you'll have a new cluster ready for use.

You'll need kubectl, which is included with the Google SDK, to begin using your GKE cluster. Refer to Chapter 1, Kubernetes and Container Operations, for details on installing the SDK. Once we have the SDK, we can configure kubectl and the SDK for our cluster using the steps outlined at https://cloud.google.com/container-engine/docs/before-you-begin#install_kubectl.
Twistlock

Twistlock.io is a vulnerability and hardening tool tailor-made for containers. It provides the ability to enforce policy and audit risk at the container level itself. While not specifically designed for Kubernetes, it promises to be a core piece of governance and compliance for container operations. Here is a brief description from their website:

"Twistlock is the first security solution designed specifically to protect containerized computing and micro-services.

The Twistlock Security Suite detects vulnerabilities, hardens container images, and enforces security policies across the lifecycle of applications.

We are portable and agentless; we run everywhere your containers do… dev workstations, public clouds, private clouds."

Note: Please refer to the Twistlock website for more information:

https://www.twistlock.io/
Kismatic

Kismatic was founded by a few folks with ties to both the Kubernetes and Mesos ecosystems. They are aiming to provide enterprise support for Kubernetes. They were early contributors and built much of the user interface we saw in Chapter 1, Kubernetes and Container Operations. In addition, they are building the following plugins, as listed on their site:

"Role-based access controls (RBAC): Cluster-level virtualization is achieved using Kubernetes namespaces, a mechanism in Kubernetes for partitioning resources created by users into a logically named group. We extend Kubernetes namespaces with support for RBAC, the standard enterprise systems security method used to implement mandatory access control (MAC) or discretionary access control (DAC).

Kerberos for bedrock authentication: Kubernetes currently uses client certificates, tokens, or HTTP basic authentication to authenticate users for API calls. For many enterprises, this level of authentication fails to meet production demands. Kismatic extends existing functionality by taking the API server tokens issued after the user has been (re)authenticated and integrating with bedrock authentication in Kerberos.

LDAP/AD integration: For enterprises looking to manage user access via existing directory services, Kismatic integrates Kubernetes with such services for authentication through LDAP/Active Directory.

Auditing controls: In compliance-sensitive enterprise environments, we have recognized that rich auditing and logging instrumentation and persistence are key to production stability. Therefore, we are excited to announce our audit log plugin for Kubernetes, providing a trusted way to track security-relevant information on your running Kubernetes microservices and cluster activities."

Note: Please refer to the Kismatic website for more information:

https://kismatic.com/
Mesosphere (Kubernetes on Mesos)

Mesosphere itself is building a commercially supported product (DCOS) around the open source Apache Mesos project. Apache Mesos is a cluster management system that offers scheduling and resource sharing a bit like Kubernetes itself, but at a much higher level. The open source project is used by several well-known companies, such as Twitter and Airbnb.

Note: Get more information on the Apache Mesos project and the Mesosphere offerings at these sites:

http://mesos.apache.org/
https://mesosphere.com/

Mesos, by its nature, is modular and allows the use of different frameworks for a variety of platforms. A Kubernetes framework is now available, so we can take advantage of the cluster management in Mesos while still maintaining the useful application-level abstractions in K8s. Refer to the following link:

https://github.com/mesosphere/kubernetes-mesos
Deis

The Deis project provides an open source Platform as a Service (PaaS) solution. This allows companies to deploy their own PaaS on-premise or in the public cloud. Deis uses CoreOS as the underlying operating system and runs applications in Docker. Version 1.9 now has preview support for Kubernetes as a scheduler. While this is not production-ready at the moment, it's a good one to watch if you are interested in deploying your own PaaS.

Note: You can refer to the following website for more information on Deis:

http://docs.deis.io/en/latest/customizing_deis/choosing-a-scheduler/#k8s-scheduler
OpenShift

Another PaaS solution is OpenShift from Red Hat. The OpenShift platform uses the Red Hat Atomic platform as a secure and slim OS for running containers. In version 3, Kubernetes has been added as the orchestration layer for all container operations on your PaaS. This is a great combination for managing PaaS installations at a large scale.

Note: More information on OpenShift can be found here:

https://enterprise.openshift.com/
Where to learn more

The Kubernetes project is an open source effort, so there is a broad community of contributors and enthusiasts. One great resource for finding more assistance is the Kubernetes Slack channel:

http://slack.kubernetes.io/

There is also a containers group on Google Groups. You can join here:

https://groups.google.com/forum/#!forum/google-containers

If you enjoyed this book, you can find more of my articles, how-tos, and various musings on my blogs and Twitter page:

http://www.cloudtp.com/meet-the-advisors/jonathan-baier/
https://medium.com/@grizzbaier
https://twitter.com/grizzbaier
Summary

In this final chapter, we left a few breadcrumbs to guide you on your continued journey with Kubernetes. You should have a solid set of production characteristics to get you started. There is a wide community in both the Docker and Kubernetes worlds. There are also a few additional resources we provided if you need a friendly face along the way.

By now, we have seen the full spectrum of container operations with Kubernetes. You should be more confident in how Kubernetes can streamline the management of your container deployments and how you can plan to move containers off the developer laptops and onto production servers.
Index

A
ACI / rkt
advanced services
  about / Advanced services
  external services / External services
  internal services / Internal services
  custom load balancing / Custom load balancing
  cross-node proxy / Cross-node proxy
  custom ports / Custom ports
  multiple ports / Multiple ports
  migrations / Migrations, multicluster, and more
  multicluster / Migrations, multicluster, and more
  custom addressing / Custom addressing
alerting, system monitoring with Sysdig
  about / Alerting
  Baseline option / Alerting
  Host Comparison option / Alerting
Amazon Web Services (AWS) / Our first cluster
Apache / What is a container?
appc specification / Standard container specification
applications
  scaling up / Scaling up
  updates / Smooth updates
application scheduling
  about / Application scheduling
  example / Scheduling example
architecture, Kubernetes
  about / The architecture
  master / Master
  nodes / Node (formerly minions)

B
balanced design
  about / Balanced design
Border Gateway Protocol (BGP) / Project Calico
Borg / Advantages of Kubernetes
built-in monitoring
  about / Built-in monitoring
  Heapster, exploring / Exploring Heapster
  dashboards, customizing / Customizing our dashboards

C
cAdvisor
  about / Built-in monitoring
  URL / Built-in monitoring
Cloud Foundry / Standard container specification
Cloud Native Computing Foundation (CNCF) / Cloud Native Computing Foundation
cloud volumes, persistent storage
  about / Cloud volumes
  GCE persistent disks / GCE persistent disks
  AWS Elastic Block Store / AWS Elastic Block Store
cluster
  about / Our first cluster
  Kubernetes UI / Kubernetes UI
  Grafana / Grafana
  Swagger / Swagger
  command line / Command line
  services, running on master / Services running on the master
  services, running on minions / Services running on the minions
  resetting / Resetting the cluster
  growing / Growing your cluster
  scaling up, on GCE / Scaling up the cluster on GCE
  scaling down / Autoscaling and scaling down
  autoscaling / Autoscaling and scaling down
  scaling up, on AWS / Scaling up the cluster on AWS
  scaling manually / Scaling manually
command line / Command line
Command Line Interface (CLI) / Working with other providers
container's afterlife / The container's afterlife
container OSes / CoreOS
containers
  about / A brief overview of containers, What is a container?
  advantages / Why are containers so cool?
  advantages, to Continuous Integration / Advantages to Continuous Integration/Continuous Deployment
  advantages, to Continuous Deployment / Advantages to Continuous Integration/Continuous Deployment
  resource utilization / Resource utilization
content-agnostic / Standard container specification
Content Trust feature / rkt
continuous delivery
  integrating with / Integration with continuous delivery
Continuous Integration / Advantages to Continuous Integration/Continuous Deployment
Contrib
  about / Built-in monitoring
Control groups (cGroups) / What is a container?
core constructs, Kubernetes
  about / Core constructs
  pods / Pods
  labels / Labels
  container's afterlife / The container's afterlife
  services / Services
  replication controllers (RCs) / Replication controllers
CoreOS
  about / CoreOS
  rkt / rkt
  etcd / etcd
CoreOS CloudFormation / Kubernetes with CoreOS
CoreUpdate / Tectonic
csysdig command-line UI
  about / The csysdig command-line UI
cutovers / Testing, releases, and cutovers

D
Deis
  about / Deis
denial-of-service attacks / What is a container?
designed for automation / Standard container specification
DNS
  about / DNS
Docker / The architecture
Docker Engine
  about / Docker
Docker plugins
  about / Docker plugins (libnetwork)
Domain Name System (DNS) / Node (formerly minions)

E
Elasticsearch / Working with other providers
example environment
  setting up / Example setup

F
Fabric8
  about / Bonus fun
  URL / Bonus fun
Flannel
  about / Flannel
FluentD
  about / FluentD
  exploring / FluentD

G
GCE monitoring
  signing up / Sign-up for GCE monitoring
  detailed monitoring, configuring / Configure detailed monitoring
  alerts / Alerts
Google Cloud Logging
  about / FluentD and Google Cloud Logging
Google Cloud Platform (GCP) / Our first cluster
Google Compute Engine (GCE) / Our first cluster
Google Container Engine
  about / Google Container Engine
Grafana
  about / Grafana
Gulp.js
  about / Gulp.js
  prerequisites / Prerequisites
  Gulp build example / Gulp build example

H
health checks
  about / Health checks
  TCP checks / TCP checks
  lifecycle hooks / Lifecycle hooks or graceful shutdown
Heapster
  URL / Built-in monitoring
  about / Built-in monitoring
  exploring / Exploring Heapster

I
industrial-grade delivery / Standard container specification
InfluxDB
  about / Built-in monitoring
infrastructure-agnostic / Standard container specification
Intel® Virtualization Technology / rkt
iptables / Advanced services

J
JavaScript / Integration with continuous delivery
Jenkins
  about / Integration with continuous delivery

K
K8s / The architecture
Kernel-based Virtual Machine (KVM) process / rkt
Key Pairs / Kubernetes with CoreOS
Kibana / Working with other providers
Kismatic
  about / Kismatic
Kube-proxy / Node (formerly minions)
kube-proxy daemons / Services
Kubernetes
  advantages / Advantages of Kubernetes
  architecture / The architecture
  core constructs / Core constructs
Kubernetes, with CoreOS
  about / Kubernetes with CoreOS
Kubernetes application
  about / Our first Kubernetes application
Kubernetes networking
  about / Kubernetes networking
Kubernetes plugin for Jenkins
  about / Kubernetes plugin for Jenkins
  prerequisites / Prerequisites
  installing / Installing plugins
  configuring / Configuring the Kubernetes plugin
Kubernetes project
  about / Where to learn more
  references / Where to learn more
Kubernetes Slack channel
  reference / Where to learn more
Kubernetes UI
  about / Kubernetes UI
kubelet / Node (formerly minions), Built-in monitoring

L
labels
  about / Labels, More on labels
LevelDB
  about / Built-in monitoring

M
master
  about / Master
Mesosphere
  about / Mesosphere (Kubernetes on Mesos)
  URL / Mesosphere (Kubernetes on Mesos)
microservices
  about / Microservices and orchestration
  future challenges / Future challenges
monitoring operations
  maturing / Maturing our monitoring operations
  GCE / GCE (StackDriver)
  StackDriver / GCE (StackDriver)
multitenancy
  about / Multitenancy, Limits

N
namespaces / What is a container?
Network Address Translation (NAT) / Kubernetes networking, Docker
networking
  about / Kubernetes networking
networking comparisons
  about / Networking comparisons
  Docker Engine / Docker
  Docker plugins / Docker plugins (libnetwork)
  Weave / Weave
  Flannel / Flannel
  Project Calico / Project Calico
Nginx / What is a container?
node
  about / Node (formerly minions)
Node.js / Integration with continuous delivery
node package manager (npm) / Prerequisites

O
Omega / Advantages of Kubernetes
Open Container Initiative (OCI)
  about / Open Container Initiative
OpenShift
  about / OpenShift
  URL / OpenShift
operations
  monitoring / Monitoring operations
orchestration
  about / Microservices and orchestration
overlay driver / Docker plugins (libnetwork)

P
persistent disks (PDs) / Persistent storage
persistent storage
  about / Persistent storage
  reference / Other PD options
placeholder / Kubernetes networking
Platform as a Service (PaaS) / Deis
pod infrastructure container / Kubernetes networking
pods
  about / Pods
  example / Pod example
port mapping / Docker
private registries
  about / Private registries
Project Calico
  about / Project Calico
providers
  working with / Working with other providers

Q
Quay Enterprise / Tectonic

R
ready for production
  about / Ready for production
Red Hat Enterprise Linux Atomic Host / CoreOS
Red Hat Linux / What is a container?
releases / Testing, releases, and cutovers
replication controllers (RCs)
  about / Replication controllers
runC implementation / Standard container specification

S
scheduler / Master
security
  about / Security
SELinux / CoreOS
service discovery
  about / Service discovery
services
  about / Services
Software-defined Networking (SDN) / Kubernetes networking
StackDriver
  about / GCE (StackDriver)
standard container specification
  about / Standard container specification
standard operations / Standard container specification
standards
  importance / The importance of standards
Swagger
  about / Swagger
  URL / Swagger
Sysdig Cloud
  about / Sysdig Cloud
  detailed views / Detailed views
  topology views / Topology views
  metrics / Metrics
Sysdig command line
  about / The Sysdig command line
system monitoring, with Sysdig
  about / Beyond system monitoring with Sysdig
  Sysdig Cloud / Sysdig Cloud
  alerting / Alerting
  Kubernetes support / Kubernetes support
  Sysdig command line / The Sysdig command line
  csysdig command-line UI / The csysdig command-line UI

T
Tectonic
  about / Tectonic
  dashboard highlights / Dashboard highlights
temporary disks
  about / Temporary disks
  cloud volumes / Cloud volumes
testing / Testing, releases, and cutovers
third-party companies
  about / Third-party companies
  private registries / Private registries
  Google Container Engine / Google Container Engine
  Twistlock.io / Twistlock
  Kismatic / Kismatic
  Mesosphere / Mesosphere (Kubernetes on Mesos)
  Deis / Deis
  OpenShift / OpenShift
Twistlock
  about / Twistlock

U
Ubuntu / What is a container?
Ubuntu Snappy / CoreOS
union filesystems / What is a container?

V
Virtual Extensible LAN (VXLAN) / Weave
Virtual Machine (VM) / Advantages to Continuous Integration/Continuous Deployment
Virtual Private Cloud (VPC) / Working with other providers
Virtual Private Clouds (VPCs) / Kubernetes with CoreOS
VMware Photon / CoreOS
vSphere / CoreOS

W
Weave
  about / Weave