= kubernetes =

 * https://www.katacoda.com/courses/kubernetes
 * Deploys Docker images as containers. A cluster has nodes; nodes run pods, and each pod holds one or more containers. A service exposes a set of pods behind a stable address.
 * Ingress: an API object that manages external access to the services in a cluster, typically HTTP. It combines ideas from reverse proxies and load balancers (see the sketch below this list).
 * minikube version # check version (1.2.0 here)
 * minikube start
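A minimal Ingress manifest, as a sketch only: it assumes an ingress controller is already installed and uses the current networking.k8s.io/v1 API (newer than the cluster versions shown on this page); the host name is made up, and it routes to the first-deployment service created in the Deploy containers section below.

{{{#!highlight yaml
# Sketch of an Ingress; "example.local" is a hypothetical host name.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
    - host: example.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: first-deployment   # service exposed later on this page
                port:
                  number: 80
}}}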

{{{#!highlight bash
minikube version
# minikube version: v1.2.0
minikube start
# * minikube v1.2.0 on linux (amd64)
# * Creating none VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
# * Configuring environment for Kubernetes v1.15.0 on Docker 18.09.5
#   - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
# * Pulling images ...
# * Launching Kubernetes ...
# * Configuring local host environment ...
# * Verifying: apiserver proxy etcd scheduler controller dns
# * Done! kubectl is now configured to use "minikube"
}}}

== cluster details and health status ==

{{{#!highlight bash
kubectl cluster-info
# Kubernetes master is running at https://172.17.0.30:8443
# KubeDNS is running at https://172.17.0.30:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
#
# To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
}}}

== get cluster nodes ==

{{{#!highlight bash
kubectl get nodes
# NAME       STATUS   ROLES    AGE    VERSION
# minikube   Ready    master   3m1s   v1.15.0
}}}

== Deploy containers ==

{{{#!highlight bash
# deploy container in cluster
kubectl create deployment first-deployment --image=katacoda/docker-http-server
# deployment.apps/first-deployment created

# check pods
kubectl get pods
# NAME                               READY   STATUS    RESTARTS   AGE
# first-deployment-8cbf74484-s2fkl   1/1     Running   0          25s

# expose deployment
kubectl expose deployment first-deployment --port=80 --type=NodePort
# service/first-deployment exposed

kubectl get svc first-deployment
# NAME               TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
# first-deployment   NodePort   10.98.246.87   <none>        80:31219/TCP   105s

# request port 80 on the cluster IP
curl 10.98.246.87:80
# This request was processed by host: first-deployment-8cbf74484-s2fkl

# request the NodePort on the node
curl host01:31219
# This request was processed by host: first-deployment-8cbf74484-s2fkl
}}}
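The same deployment and NodePort service can also be written declaratively. This is only a sketch of roughly equivalent manifests (the file name and the app label are illustrative, not taken from the course), applied with kubectl apply -f first-deployment.yaml.

{{{#!highlight yaml
# first-deployment.yaml -- approximate declarative equivalent of the commands above
apiVersion: apps/v1
kind: Deployment
metadata:
  name: first-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: first-deployment
  template:
    metadata:
      labels:
        app: first-deployment
    spec:
      containers:
        - name: docker-http-server
          image: katacoda/docker-http-server
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: first-deployment
spec:
  type: NodePort
  selector:
    app: first-deployment
  ports:
    - port: 80
      targetPort: 80
}}}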

== dashboard ==

{{{#!highlight bash
minikube addons enable dashboard
# The Kubernetes dashboard allows you to view your applications in a UI.
# * dashboard was successfully enabled

kubectl apply -f /opt/kubernetes-dashboard.yaml # only in katacoda
# service/kubernetes-dashboard-katacoda created

# check progress
kubectl get pods -n kube-system -w
# NAME                                    READY   STATUS    RESTARTS   AGE
# coredns-5c98db65d4-b2kxm                1/1     Running   0          17m
# coredns-5c98db65d4-mm567                1/1     Running   1          17m
# etcd-minikube                           1/1     Running   0          16m
# kube-addon-manager-minikube             1/1     Running   0          16m
# kube-apiserver-minikube                 1/1     Running   0          16m
# kube-controller-manager-minikube        1/1     Running   0          16m
# kube-proxy-pngm9                        1/1     Running   0          17m
# kube-scheduler-minikube                 1/1     Running   0          16m
# kubernetes-dashboard-7b8ddcb5d6-xt5nt   1/1     Running   0          76s
# storage-provisioner                     1/1     Running   0          17m

# dashboard url (katacoda environment):
# https://2886795294-30000-kitek05.environments.katacoda.com/
# how to launch a Single Node Kubernetes cluster
}}}
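The URL above only works inside the Katacoda environment. Elsewhere the dashboard is usually reached through kubectl proxy; this is a sketch and assumes the dashboard Service is named kubernetes-dashboard and lives in kube-system, as in the pod listing above.

{{{#!highlight bash
# Proxy the API server to localhost (default port 8001), then open the dashboard URL.
kubectl proxy &
# http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
}}}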
== Init master ==

{{{#!highlight bash
# In master
kubeadm init --kubernetes-version $(kubeadm version -o short)
[init] Using Kubernetes version: v1.14.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.17.0.69]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master localhost] and IPs [172.17.0.69 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [172.17.0.69 127.0.0.1 ::1]
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 16.503433 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --experimental-upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: xfvno5.q2xfb2m3nw7grdjm
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.17.0.69:6443 --token xfvno5.q2xfb2m3nw7grdjm \
    --discovery-token-ca-cert-hash sha256:26d11c038d236967630d401747f210af9e3679fb1638e8b599a2da4cb98ab159
}}}

{{{#!highlight bash
# In master
mkdir -p $HOME/.kube
pwd
# /root
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=$HOME/.kube/config
echo $KUBECONFIG
# /root/.kube/config
}}}

== Deploy cni weaveworks - deploy a pod network to the cluster ==

The Container Network Interface (CNI) defines how the different nodes and their workloads should communicate. Weave Net provides a network that connects all pods together, implementing the Kubernetes model. Kubernetes uses the CNI to join pods onto Weave Net.
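Once a CNI plugin such as Weave Net is installed (next block), each node carries a CNI configuration. A quick way to check it, shown only as a sketch using the conventional default paths rather than anything specific to this course:

{{{#!highlight bash
# Conventional CNI locations on a node (assumes default kubelet settings).
ls /etc/cni/net.d/   # Weave Net drops e.g. 10-weave.conflist here
ls /opt/cni/bin/     # CNI plugin binaries installed on the node
}}}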
{{{#!highlight bash
# In master
kubectl apply -f /opt/weave-kube
# serviceaccount/weave-net created
# clusterrole.rbac.authorization.k8s.io/weave-net created
# clusterrolebinding.rbac.authorization.k8s.io/weave-net created
# role.rbac.authorization.k8s.io/weave-net created
# rolebinding.rbac.authorization.k8s.io/weave-net created
# daemonset.extensions/weave-net created

kubectl get pod -n kube-system
# NAME                             READY   STATUS    RESTARTS   AGE
# coredns-fb8b8dccf-b9rd7          1/1     Running   0          11m
# coredns-fb8b8dccf-sfgbn          1/1     Running   0          11m
# etcd-master                      1/1     Running   0          10m
# kube-apiserver-master            1/1     Running   0          10m
# kube-controller-manager-master   1/1     Running   0          10m
# kube-proxy-l42wp                 1/1     Running   0          11m
# kube-scheduler-master            1/1     Running   1          10m
# weave-net-mcxml                  2/2     Running   0          84s
}}}

== Join cluster ==

{{{#!highlight bash
# In master
kubeadm token list # check tokens
# TOKEN                     TTL   EXPIRES                USAGES                   DESCRIPTION                                                 EXTRA GROUPS
# xfvno5.q2xfb2m3nw7grdjm   23h   2019-07-28T16:19:18Z   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token
}}}

{{{#!highlight bash
# in node01
# join cluster
kubeadm join --discovery-token-unsafe-skip-ca-verification --token=xfvno5.q2xfb2m3nw7grdjm 172.17.0.69:6443
# [preflight] Running pre-flight checks
# [preflight] Reading configuration from the cluster...
# [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
# [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
# [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
# [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
# [kubelet-start] Activating the kubelet service
# [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
#
# This node has joined the cluster:
# * Certificate signing request was sent to apiserver and a response was received.
# * The Kubelet was informed of the new secure connection details.
#
# Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
# The --discovery-token-unsafe-skip-ca-verification flag bypasses the discovery token verification.

# in master
kubectl get nodes
# NAME     STATUS   ROLES    AGE    VERSION
# master   Ready    master   17m    v1.14.0
# node01   Ready    <none>   107s   v1.14.0

# in node01 (kubectl is not configured there)
kubectl get nodes
# The connection to the server localhost:8080 was refused - did you specify the right host or port?
}}}

== Deploy container in cluster ==

{{{#!highlight bash
# In master
kubectl create deployment http --image=katacoda/docker-http-server:latest
# deployment.apps/http created
kubectl get pods
# NAME                    READY   STATUS    RESTARTS   AGE
# http-7f8cbdf584-74pd9   1/1     Running   0          11s
docker ps | grep http-server

# In node01
docker ps | grep http-server
# adb3cde7f861  katacoda/docker-http-server  "/app"  About a minute ago  Up About a minute  k8s_docker-http-server_http-7f8cbdf584-74pd9_default_04a17065-b08d-11e9-bff1-0242ac110045_0

# expose deployment in master
kubectl get pods
# NAME                    READY   STATUS    RESTARTS   AGE
# http-7f8cbdf584-74pd9   1/1     Running   0          17m
kubectl expose deployment http --port=80 --type=NodePort
# service/http exposed
kubectl get service http
# NAME   TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
# http   NodePort   10.101.65.149   <none>        80:30982/TCP   49s

curl 10.101.65.149:80
# This request was processed by host: http-7f8cbdf584-74pd9
curl http://10.101.65.149
# This request was processed by host: http-7f8cbdf584-74pd9
}}}

== Apply dashboard in cluster ==

 * Dashboard: general-purpose web UI for Kubernetes clusters (dashboard version v1.10.0)

{{{#!highlight sh
# In master
kubectl apply -f dashboard.yaml
# secret/kubernetes-dashboard-certs created
# serviceaccount/kubernetes-dashboard created
# role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
# rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
# deployment.apps/kubernetes-dashboard created
# service/kubernetes-dashboard created

kubectl get pods -n kube-system
# NAME                                    READY   STATUS              RESTARTS   AGE
# coredns-fb8b8dccf-b9rd7                 1/1     Running             0          42m
# coredns-fb8b8dccf-sfgbn                 1/1     Running             0          42m
# etcd-master                             1/1     Running             0          41m
# kube-apiserver-master                   1/1     Running             0          40m
# kube-controller-manager-master          1/1     Running             0          40m
# kube-proxy-gwrps                        1/1     Running             0          26m
# kube-proxy-l42wp                        1/1     Running             0          42m
# kube-scheduler-master                   1/1     Running             1          40m
# kubernetes-dashboard-5f57845f9d-ls7q2   0/1     ContainerCreating   0          2s
# weave-net-gww8b                         2/2     Running             0          26m
# weave-net-mcxml                         2/2     Running             0          31m
}}}

Create a service account for the dashboard.
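A minimal service account with cluster-admin rights for the dashboard, shown as a sketch only (the admin-user name and the cluster-admin binding are common conventions, not necessarily the exact manifest used in the course):

{{{#!highlight yaml
# Sketch: service account plus cluster-admin binding used to log in to the dashboard.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: admin-user
    namespace: kube-system
}}}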

{{{#!highlight sh
kubectl get svc
# NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
# http         NodePort    10.101.65.149   <none>        80:30982/TCP   17m
# kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        56m
}}}

== Start containers using Kubectl ==

{{{#!highlight bash
minikube start # start kubernetes cluster and its components
# * minikube v1.2.0 on linux (amd64)
# * Creating none VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
# * Configuring environment for Kubernetes v1.15.0 on Docker 18.09.5
#   - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
# * Pulling images ...
# * Launching Kubernetes ...
# * Configuring local host environment ...
# * Verifying: apiserver proxy etcd scheduler controller dns
# * Done! kubectl is now configured to use "minikube"

kubectl get nodes
# NAME       STATUS   ROLES    AGE    VERSION
# minikube   Ready    master   2m2s   v1.15.0

kubectl get service
# NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
# kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   2m18s

# The deployment is issued to the Kubernetes master, which launches the Pods and containers required.
# kubectl run is similar to docker run, but at a cluster level.
# Launch a deployment called http which starts a container based on the Docker image katacoda/docker-http-server:latest.
kubectl run http --image=katacoda/docker-http-server:latest --replicas=1
# kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version.
# Use kubectl run --generator=run-pod/v1 or kubectl create instead.
# deployment.apps/http created

kubectl get deployments
# NAME   READY   UP-TO-DATE   AVAILABLE   AGE
# http   1/1     1            1           6s

# describe the deployment process
kubectl describe deployment http

# expose container port 80 on host port 8000, binding to the external IP of the host
kubectl expose deployment http --external-ip="172.17.0.13" --port=8000 --target-port=80
# service/http exposed

curl http://172.17.0.13:8000
# This request was processed by host: http-5fcf9dd9cb-zfkkz
kubectl get pods
# NAME                    READY   STATUS    RESTARTS   AGE
# http-5fcf9dd9cb-zfkkz   1/1     Running   0          3m26s

kubectl get service
# NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
# http         ClusterIP   10.100.157.159   172.17.0.13   8000/TCP   57s
# kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP    7m41s

curl http://10.100.157.159:8000
# This request was processed by host: http-5fcf9dd9cb-zfkkz
# expose the container directly on a host port
kubectl run httpexposed --image=katacoda/docker-http-server:latest --replicas=1 --port=80 --hostport=8001
# kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version.
# Use kubectl run --generator=run-pod/v1 or kubectl create instead.
# deployment.apps/httpexposed created

curl http://172.17.0.13:8001
# This request was processed by host: httpexposed-569df5d86-rzzhb
kubectl get svc
# NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
# http         ClusterIP   10.100.157.159   172.17.0.13   8000/TCP   3m50s
# kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP    10m

kubectl get pods
# NAME                          READY   STATUS    RESTARTS   AGE
# http-5fcf9dd9cb-zfkkz         1/1     Running   0          7m9s
# httpexposed-569df5d86-rzzhb   1/1     Running   0          36s

# Scaling the deployment will request Kubernetes to launch additional Pods.
kubectl scale --replicas=3 deployment http
# deployment.extensions/http scaled

kubectl get pods # number of pods for the http deployment increased to 3
# NAME                          READY   STATUS    RESTARTS   AGE
# http-5fcf9dd9cb-fhljh         1/1     Running   0          31s
# http-5fcf9dd9cb-wb2dh         1/1     Running   0          31s
# http-5fcf9dd9cb-zfkkz         1/1     Running   0          9m27s
# httpexposed-569df5d86-rzzhb   1/1     Running   0          2m54s

# Once each Pod starts it will be added to the load balancer service.
kubectl get service
# NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
# http         ClusterIP   10.100.157.159   172.17.0.13   8000/TCP   7m28s
# kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP    14m

kubectl describe svc http
# Name:              http
# Namespace:         default
# Labels:            run=http
# Annotations:       <none>
# Selector:          run=http
# Type:              ClusterIP
# IP:                10.100.157.159
# External IPs:      172.17.0.13
# Port:              8000/TCP
# TargetPort:        80/TCP
# Endpoints:         172.18.0.4:80,172.18.0.6:80,172.18.0.7:80
# Session Affinity:  None
# Events:            <none>

curl http://172.17.0.13:8000
# This request was processed by host: http-5fcf9dd9cb-wb2dh
curl http://172.17.0.13:8000
# This request was processed by host: http-5fcf9dd9cb-fhljh
curl http://172.17.0.13:8000
# This request was processed by host: http-5fcf9dd9cb-zfkkz
}}}
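The round-robin behaviour above comes from the service spreading requests over its endpoints; the endpoints object lists the pod IPs currently backing the service. The pod IPs below are the ones from the describe output above; the AGE value is illustrative.

{{{#!highlight bash
kubectl get endpoints http
# NAME   ENDPOINTS                                   AGE
# http   172.18.0.4:80,172.18.0.6:80,172.18.0.7:80   8m
}}}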

== Certified kubernetes application developer ==

 * https://www.katacoda.com/courses/kubernetes/first-steps-to-ckad-certification

{{{#!highlight bash
# In master
launch.sh
# Waiting for Kubernetes to start...
# Kubernetes started

kubectl get nodes
# NAME     STATUS   ROLES    AGE   VERSION
# master   Ready    master   85m   v1.14.0
# node01   Ready    <none>   85m   v1.14.0

# deploy app
kubectl create deployment examplehttpapp --image=katacoda/docker-http-server
# deployment.apps/examplehttpapp created

# view all deployments
kubectl get deployments
# NAME             READY   UP-TO-DATE   AVAILABLE   AGE
# examplehttpapp   1/1     1            1           25s

# A deployment launches a set of Pods. A pod is a group of one or more containers deployed across the cluster.
kubectl get pods
# NAME                            READY   STATUS    RESTARTS   AGE
# examplehttpapp-58f66848-n7wn7   1/1     Running   0          71s

# show the pod IP and the node it runs on
kubectl get pods -o wide
# NAME                            READY   STATUS    RESTARTS   AGE    IP          NODE     NOMINATED NODE   READINESS GATES
# examplehttpapp-58f66848-n7wn7   1/1     Running   0          113s   10.44.0.2   node01   <none>           <none>

# describe pod
kubectl describe pod examplehttpapp-58f66848-n7wn7
# Name:               examplehttpapp-58f66848-n7wn7
# Namespace:          default
# Priority:           0
# PriorityClassName:  <none>
# Node:               node01/172.17.0.24
# Start Time:         Sat, 27 Jul 2019 17:59:35 +0000
# Labels:             app=examplehttpapp
#                     pod-template-hash=58f66848
# Annotations:        <none>
# Status:             Running
# IP:                 10.44.0.2

# List all namespaces in the cluster with kubectl get namespaces (or kubectl get ns).
kubectl get ns
# Namespaces can be used to filter queries to the available objects.
kubectl get pods -n kube-system

kubectl create ns testns
# namespace/testns created
kubectl create deployment namespacedeg -n testns --image=katacoda/docker-http-server
# deployment.apps/namespacedeg created
kubectl get pods -n testns
# NAME                            READY   STATUS    RESTARTS   AGE
# namespacedeg-74dcc7dc64-wcxnj   1/1     Running   0          3s
kubectl get pods -n testns -o wide
# NAME                            READY   STATUS    RESTARTS   AGE   IP          NODE     NOMINATED NODE   READINESS GATES
# namespacedeg-74dcc7dc64-wcxnj   1/1     Running   0          18s   10.44.0.3   node01   <none>           <none>

# Kubectl can scale the number of Pods running for a deployment, referred to as replicas.
kubectl scale deployment examplehttpapp --replicas=5
# deployment.extensions/examplehttpapp scaled
kubectl get deployments -o wide
# NAME             READY   UP-TO-DATE   AVAILABLE   AGE    CONTAINERS           IMAGES                        SELECTOR
# examplehttpapp   5/5     5            5           9m6s   docker-http-server   katacoda/docker-http-server   app=examplehttpapp
kubectl get pods -o wide
# NAME                            READY   STATUS    RESTARTS   AGE     IP          NODE     NOMINATED NODE   READINESS GATES
# examplehttpapp-58f66848-cf6pl   1/1     Running   0          65s     10.44.0.6   node01   <none>           <none>
# examplehttpapp-58f66848-lfrq4   1/1     Running   0          65s     10.44.0.5   node01   <none>           <none>
# examplehttpapp-58f66848-n7wn7   1/1     Running   0          9m26s   10.44.0.2   node01   <none>           <none>
# examplehttpapp-58f66848-snwl7   1/1     Running   0          65s     10.44.0.7   node01   <none>           <none>
# examplehttpapp-58f66848-vd8db   1/1     Running   0          65s     10.44.0.4   node01   <none>           <none>

# Everything within Kubernetes is controllable as YAML.
}}}
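Because the objects are YAML, kubectl edit (next block) opens the live Deployment in an editor. A trimmed sketch of what that object roughly looks like, keeping only the fields relevant here rather than the full server-side object:

{{{#!highlight yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: examplehttpapp
spec:
  replicas: 10          # changing this from 1 to 10 and saving rescales the deployment
  selector:
    matchLabels:
      app: examplehttpapp
  template:
    metadata:
      labels:
        app: examplehttpapp
    spec:
      containers:
        - name: docker-http-server
          image: katacoda/docker-http-server
}}}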
{{{#!highlight bash
# Opens vi/vim; changing spec.replicas to 10 and saving the file increases the number of pods.
kubectl edit deployment examplehttpapp

kubectl get nodes -o wide
# NAME     STATUS   ROLES    AGE    VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
# master   Ready    master   102m   v1.14.0   172.17.0.19   <none>        Ubuntu 16.04.6 LTS   4.4.0-150-generic   docker://18.9.5
# node01   Ready    <none>   102m   v1.14.0   172.17.0.24   <none>        Ubuntu 16.04.6 LTS   4.4.0-150-generic   docker://18.9.5

# The image can be changed using the set image command; rollout reports and controls the update.
# apply new image to the pods/containers
kubectl --record=true set image deployment examplehttpapp docker-http-server=katacoda/docker-http-server:v2
# deployment.extensions/examplehttpapp image updated

kubectl rollout status deployment examplehttpapp
# Waiting for deployment "examplehttpapp" rollout to finish: 1 out of 3 new replicas have been updated...
# Waiting for deployment "examplehttpapp" rollout to finish: 1 out of 3 new replicas have been updated...
# Waiting for deployment "examplehttpapp" rollout to finish: 1 out of 3 new replicas have been updated...
# Waiting for deployment "examplehttpapp" rollout to finish: 2 out of 3 new replicas have been updated...
# Waiting for deployment "examplehttpapp" rollout to finish: 2 out of 3 new replicas have been updated...
# Waiting for deployment "examplehttpapp" rollout to finish: 2 out of 3 new replicas have been updated...
# Waiting for deployment "examplehttpapp" rollout to finish: 1 old replicas are pending termination...
# Waiting for deployment "examplehttpapp" rollout to finish: 1 old replicas are pending termination...
# deployment "examplehttpapp" successfully rolled out

# rollback deployment
kubectl rollout undo deployment examplehttpapp
# deployment.extensions/examplehttpapp rolled back

# The expose command creates a new service for a deployment. --port specifies the port of the application we want to make available.
kubectl expose deployment examplehttpapp --port 80
# service/examplehttpapp exposed

kubectl get svc -o wide
# NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE    SELECTOR
# examplehttpapp   ClusterIP   10.103.93.196   <none>        80/TCP    13s    app=examplehttpapp
# kubernetes       ClusterIP   10.96.0.1       <none>        443/TCP   105m   <none>

kubectl describe svc examplehttpapp
# Name:              examplehttpapp
# Namespace:         default
# Labels:            app=examplehttpapp
# Annotations:       <none>
# Selector:          app=examplehttpapp
# Type:              ClusterIP
# IP:                10.103.93.196
# Port:              80/TCP
# TargetPort:        80/TCP
# Endpoints:         10.44.0.2:80,10.44.0.4:80,10.44.0.5:80
# Session Affinity:  None
# Events:            <none>

# But how does Kubernetes know where to send traffic? That is managed by labels.
# Each object within Kubernetes can have a label attached, allowing Kubernetes to discover and use the configuration.
kubectl get services -l app=examplehttpapp -o go-template='{{(index .items 0).spec.clusterIP}}'

kubectl get services
# NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE     AGE
# examplehttpapp   ClusterIP   10.103.93.196   <none>        80/TCP    2m32s
# kubernetes       ClusterIP   10.96.0.1       <none>        443/TCP   107m
kubectl get services -o wide
# NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE     SELECTOR
# examplehttpapp   ClusterIP   10.103.93.196   <none>        80/TCP    2m36s   app=examplehttpapp
# kubernetes       ClusterIP   10.96.0.1       <none>        443/TCP   107m    <none>

# kubectl logs to view the logs for Pods
kubectl logs $(kubectl get pods -l app=examplehttpapp -o go-template='{{(index .items 0).metadata.name}}')
# Web Server started. Listening on 0.0.0.0:80
}}}
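As noted above, routing is label-driven: the service's selector has to match the labels on the pods. A minimal sketch of that relationship for the examplehttpapp service, with the fields taken from the describe output above:

{{{#!highlight yaml
apiVersion: v1
kind: Service
metadata:
  name: examplehttpapp
spec:
  selector:
    app: examplehttpapp   # must match the pod label set by the deployment
  ports:
    - port: 80            # service port
      targetPort: 80      # container port the traffic is forwarded to
}}}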
{{{#!highlight bash
# view the CPU or memory usage of a node or Pod
kubectl top node
# NAME     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
# master   133m         3%     1012Mi          53%
# node01   49m          1%     673Mi           17%
kubectl top pod
# NAME                            CPU(cores)   MEMORY(bytes)
# examplehttpapp-58f66848-ctnml   0m           0Mi
# examplehttpapp-58f66848-gljk9   1m           0Mi
# examplehttpapp-58f66848-hfqts   1m           0Mi
}}}

== Shows mapping between services and pods ==

{{{#!highlight sh
kubectl get endpoints
}}}

== Show events ordered by last timestamp ascending ==

{{{#!highlight sh
kubectl get events --sort-by='.lastTimestamp'
}}}