= kubernetes =
 * https://www.katacoda.com/courses/kubernetes
 * Kubernetes deploys Docker images as containers. A cluster has nodes; each node runs pods, and each pod runs one or more containers. A deployment is typically exposed as a service.
 * Ingress: an API object that manages external access to the services in a cluster, typically HTTP. It combines ideas such as reverse proxy and load balancing.
 * minikube version checks the installed version; minikube start brings up a local cluster.

{{{#!highlight bash
minikube version
# minikube version: v1.2.0
minikube start
# * minikube v1.2.0 on linux (amd64)
# * Creating none VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
# * Configuring environment for Kubernetes v1.15.0 on Docker 18.09.5
#   - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
# * Pulling images ...
# * Launching Kubernetes ...
# * Configuring local host environment ...
# * Verifying: apiserver proxy etcd scheduler controller dns
# * Done! kubectl is now configured to use "minikube"
}}}

== cluster details and health status ==
{{{#!highlight bash
kubectl cluster-info
# Kubernetes master is running at https://172.17.0.30:8443
# KubeDNS is running at https://172.17.0.30:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
#
# To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
}}}

== get cluster nodes ==
{{{#!highlight bash
kubectl get nodes
# NAME       STATUS   ROLES    AGE    VERSION
# minikube   Ready    master   3m1s   v1.15.0
}}}

== Deploy containers ==
{{{#!highlight bash
# deploy a container in the cluster
kubectl create deployment first-deployment --image=katacoda/docker-http-server
# deployment.apps/first-deployment created

# check pods
kubectl get pods
# NAME                               READY   STATUS    RESTARTS   AGE
# first-deployment-8cbf74484-s2fkl   1/1     Running   0          25s

# expose the deployment on port 80
kubectl expose deployment first-deployment --port=80 --type=NodePort
# service/first-deployment exposed
kubectl get svc first-deployment
# NAME               TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
# first-deployment   NodePort   10.98.246.87   <none>        80:31219/TCP   105s

# send a request to port 80 on the cluster IP
curl 10.98.246.87:80

# This request was processed by host: first-deployment-8cbf74484-s2fkl

curl host01:31219
# This request was processed by host: first-deployment-8cbf74484-s2fkl
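
# The assigned NodePort (31219 above) can also be read programmatically;
# a small sketch using kubectl's jsonpath output:
kubectl get svc first-deployment -o jsonpath='{.spec.ports[0].nodePort}'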

}}}

== dashboard ==
{{{#!highlight bash
# The Kubernetes dashboard allows you to view your applications in a UI.
minikube addons enable dashboard
# * dashboard was successfully enabled
kubectl apply -f /opt/kubernetes-dashboard.yaml # only in katacoda
# service/kubernetes-dashboard-katacoda created

# check progress
kubectl get pods -n kube-system -w
# NAME                                    READY   STATUS    RESTARTS   AGE
# coredns-5c98db65d4-b2kxm                1/1     Running   0          17m
# coredns-5c98db65d4-mm567                1/1     Running   1          17m
# etcd-minikube                           1/1     Running   0          16m
# kube-addon-manager-minikube             1/1     Running   0          16m
# kube-apiserver-minikube                 1/1     Running   0          16m
# kube-controller-manager-minikube        1/1     Running   0          16m
# kube-proxy-pngm9                        1/1     Running   0          17m
# kube-scheduler-minikube                 1/1     Running   0          16m
# kubernetes-dashboard-7b8ddcb5d6-xt5nt   1/1     Running   0          76s
# storage-provisioner                     1/1     Running   0          17m

# dashboard url
# https://2886795294-30000-kitek05.environments.katacoda.com/
}}}

== Init master ==
{{{#!highlight bash
master $ kubeadm init --kubernetes-version $(kubeadm version -o short)
[init] Using Kubernetes version: v1.14.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.17.0.69]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master localhost] and IPs [172.17.0.69 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [172.17.0.69 127.0.0.1 ::1]
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests".
This can take up to 4m0s
[apiclient] All control plane components are healthy after 16.503433 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --experimental-upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: xfvno5.q2xfb2m3nw7grdjm
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.17.0.69:6443 --token xfvno5.q2xfb2m3nw7grdjm \
    --discovery-token-ca-cert-hash sha256:26d11c038d236967630d401747f210af9e3679fb1638e8b599a2da4cb98ab159
}}}

{{{#!highlight bash
# In master
mkdir -p $HOME/.kube
pwd
# /root
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=$HOME/.kube/config
echo $KUBECONFIG
# /root/.kube/config
}}}

== Deploy cni weaveworks - deploy a pod network to the cluster ==
Container Network Interface (CNI) defines how the different nodes and their workloads should communicate. Weave Net provides a network to connect all pods together, implementing the Kubernetes model. Kubernetes uses the Container Network Interface (CNI) to join pods onto Weave Net.
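Outside katacoda, Weave Net is normally applied straight from its hosted manifest rather than the local /opt/weave-kube copy; a sketch of the command as Weaveworks documented it at the time (URL is an assumption and may no longer be live):

{{{#!highlight bash
# apply Weave Net matched to the cluster's Kubernetes version
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
}}}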
{{{#!highlight bash
# In master
kubectl apply -f /opt/weave-kube
# serviceaccount/weave-net created
# clusterrole.rbac.authorization.k8s.io/weave-net created
# clusterrolebinding.rbac.authorization.k8s.io/weave-net created
# role.rbac.authorization.k8s.io/weave-net created
# rolebinding.rbac.authorization.k8s.io/weave-net created
# daemonset.extensions/weave-net created

kubectl get pod -n kube-system
# NAME                             READY   STATUS    RESTARTS   AGE
# coredns-fb8b8dccf-b9rd7          1/1     Running   0          11m
# coredns-fb8b8dccf-sfgbn          1/1     Running   0          11m
# etcd-master                      1/1     Running   0          10m
# kube-apiserver-master            1/1     Running   0          10m
# kube-controller-manager-master   1/1     Running   0          10m
# kube-proxy-l42wp                 1/1     Running   0          11m
# kube-scheduler-master            1/1     Running   1          10m
# weave-net-mcxml                  2/2     Running   0          84s
}}}

== Join cluster ==
{{{#!highlight bash
# In master
kubeadm token list # check tokens
# TOKEN                     TTL   EXPIRES                USAGES                   DESCRIPTION                                                 EXTRA GROUPS
# xfvno5.q2xfb2m3nw7grdjm   23h   2019-07-28T16:19:18Z   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token
}}}

{{{#!highlight bash
# in node01, join the cluster
kubeadm join --discovery-token-unsafe-skip-ca-verification --token=xfvno5.q2xfb2m3nw7grdjm 172.17.0.69:6443
# [preflight] Running pre-flight checks
# [preflight] Reading configuration from the cluster...
# [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
# [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
# [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
# [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
# [kubelet-start] Activating the kubelet service
# [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
#
# This node has joined the cluster:
# * Certificate signing request was sent to apiserver and a response was received.
# * The Kubelet was informed of the new secure connection details.
#
# Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

# The --discovery-token-unsafe-skip-ca-verification flag bypasses the discovery token verification.

# in master
kubectl get nodes
# NAME     STATUS   ROLES    AGE    VERSION
# master   Ready    master   17m    v1.14.0
# node01   Ready    <none>   107s   v1.14.0

# in node01, kubectl is not configured
kubectl get nodes
# The connection to the server localhost:8080 was refused - did you specify the right host or port
}}}

== Deploy container in cluster ==
{{{#!highlight bash
# In master
kubectl create deployment http --image=katacoda/docker-http-server:latest
# deployment.apps/http created
kubectl get pods
# NAME                    READY   STATUS    RESTARTS   AGE
# http-7f8cbdf584-74pd9   1/1     Running   0          11s
docker ps | grep http-server

# In node01
docker ps | grep http-server
# adb3cde7f861  katacoda/docker-http-server  "/app"  About a minute ago  Up About a minute  k8s_docker-http-server_http-7f8cbdf584-74pd9_default_04a17065-b08d-11e9-bff1-0242ac110045_0

# expose deployment in master
kubectl get pods
# NAME                    READY   STATUS    RESTARTS   AGE
# http-7f8cbdf584-74pd9   1/1     Running   0          17m
kubectl expose deployment http --port=80 --type=NodePort
# service/http exposed
kubectl get service http
# NAME   TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
# http   NodePort   10.101.65.149   <none>        80:30982/TCP   49s

curl 10.101.65.149:80

# This request was processed by host: http-7f8cbdf584-74pd9

curl http://10.101.65.149
# This request was processed by host: http-7f8cbdf584-74pd9
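
# Bootstrap tokens expire (TTL 23h above); a fresh join command can be
# generated on the master at any time with standard kubeadm:
kubeadm token create --print-join-command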

}}}

== Apply dashboard in cluster ==
 * Dashboard: general-purpose web UI for Kubernetes clusters. Dashboard version: v1.10.0.

{{{#!highlight sh
# In master
kubectl apply -f dashboard.yaml
# secret/kubernetes-dashboard-certs created
# serviceaccount/kubernetes-dashboard created
# role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
# rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
# deployment.apps/kubernetes-dashboard created
# service/kubernetes-dashboard created
kubectl get pods -n kube-system
# NAME                                    READY   STATUS              RESTARTS   AGE
# coredns-fb8b8dccf-b9rd7                 1/1     Running             0          42m
# coredns-fb8b8dccf-sfgbn                 1/1     Running             0          42m
# etcd-master                             1/1     Running             0          41m
# kube-apiserver-master                   1/1     Running             0          40m
# kube-controller-manager-master          1/1     Running             0          40m
# kube-proxy-gwrps                        1/1     Running             0          26m
# kube-proxy-l42wp                        1/1     Running             0          42m
# kube-scheduler-master                   1/1     Running             1          40m
# kubernetes-dashboard-5f57845f9d-ls7q2   0/1     ContainerCreating   0          2s
# weave-net-gww8b                         2/2     Running             0          26m
# weave-net-mcxml                         2/2     Running             0          31m
}}}

Create a service account for the dashboard. The heredoc below is a typical admin service account plus cluster-role binding; the account name admin-user is illustrative, not the original value:

{{{#!highlight sh
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user            # assumed name
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user            # assumed name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
EOF

kubectl get svc
# NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
# http         NodePort    10.101.65.149   <none>        80:30982/TCP   17m
# kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        56m
}}}

== Start containers using Kubectl ==
{{{#!highlight bash
minikube start # start kubernetes cluster and its components
# * minikube v1.2.0 on linux (amd64)
# * Creating none VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
# * Configuring environment for Kubernetes v1.15.0 on Docker 18.09.5
#   - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
# * Pulling images ...
# * Launching Kubernetes ...
# * Configuring local host environment ...
# * Verifying: apiserver proxy etcd scheduler controller dns
# * Done! kubectl is now configured to use "minikube"

kubectl get nodes
# NAME       STATUS   ROLES    AGE    VERSION
# minikube   Ready    master   2m2s   v1.15.0
kubectl get service
# NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
# kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   2m18s

# A deployment is issued to the Kubernetes master, which launches the Pods and
# containers required. kubectl run is similar to docker run, but at cluster level.
# Launch a deployment called http which starts a container based on the Docker
# image katacoda/docker-http-server:latest.
kubectl run http --image=katacoda/docker-http-server:latest --replicas=1
# kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
# deployment.apps/http created
kubectl get deployments
# NAME   READY   UP-TO-DATE   AVAILABLE   AGE
# http   1/1     1            1           6s

# describe the deployment
kubectl describe deployment http

# expose container port 80 on host port 8000, bound to the external IP of the host
kubectl expose deployment http --external-ip="172.17.0.13" --port=8000 --target-port=80
# service/http exposed
curl http://172.17.0.13:8000

# This request was processed by host: http-5fcf9dd9cb-zfkkz
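
# The service forwards to the pod endpoints behind it; a quick check:
kubectl get endpoints http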

kubectl get pods
# NAME                    READY   STATUS    RESTARTS   AGE
# http-5fcf9dd9cb-zfkkz   1/1     Running   0          3m26s
kubectl get service
# NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
# http         ClusterIP   10.100.157.159   172.17.0.13   8000/TCP   57s
# kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP    7m41s
curl http://10.100.157.159:8000

# This request was processed by host: http-5fcf9dd9cb-zfkkz

# expose pod port 80 directly on host port 8001 via hostPort
kubectl run httpexposed --image=katacoda/docker-http-server:latest --replicas=1 --port=80 --hostport=8001
# kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version.
# Use kubectl run --generator=run-pod/v1 or kubectl create instead.
# deployment.apps/httpexposed created
curl http://172.17.0.13:8001

# This request was processed by host: httpexposed-569df5d86-rzzhb
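
# hostPort publishes the port on the node through the container runtime rather
# than through a Service, so no httpexposed service appears below; the port
# binding can be inspected on the node (sketch):
docker ps | grep httpexposed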

kubectl get svc
# NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
# http         ClusterIP   10.100.157.159   172.17.0.13   8000/TCP   3m50s
# kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP    10m
kubectl get pods
# NAME                          READY   STATUS    RESTARTS   AGE
# http-5fcf9dd9cb-zfkkz         1/1     Running   0          7m9s
# httpexposed-569df5d86-rzzhb   1/1     Running   0          36s

# Scaling the deployment will request Kubernetes to launch additional Pods.
kubectl scale --replicas=3 deployment http
# deployment.extensions/http scaled
kubectl get pods # the number of pods for deployment http increased to 3
# NAME                          READY   STATUS    RESTARTS   AGE
# http-5fcf9dd9cb-fhljh         1/1     Running   0          31s
# http-5fcf9dd9cb-wb2dh         1/1     Running   0          31s
# http-5fcf9dd9cb-zfkkz         1/1     Running   0          9m27s
# httpexposed-569df5d86-rzzhb   1/1     Running   0          2m54s

# Once each Pod starts, it is added to the load balancer service.
kubectl get service
# NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
# http         ClusterIP   10.100.157.159   172.17.0.13   8000/TCP   7m28s
# kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP    14m
kubectl describe svc http
# Name:              http
# Namespace:         default
# Labels:            run=http
# Annotations:       <none>
# Selector:          run=http
# Type:              ClusterIP
# IP:                10.100.157.159
# External IPs:      172.17.0.13
# Port:              <unset>  8000/TCP
# TargetPort:        80/TCP
# Endpoints:         172.18.0.4:80,172.18.0.6:80,172.18.0.7:80
# Session Affinity:  None
# Events:            <none>
curl http://172.17.0.13:8000

# This request was processed by host: http-5fcf9dd9cb-wb2dh

curl http://172.17.0.13:8000
# This request was processed by host: http-5fcf9dd9cb-fhljh

curl http://172.17.0.13:8000
# This request was processed by host: http-5fcf9dd9cb-zfkkz
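
# Repeated requests are load-balanced across the three pods, as the differing
# host names above show; the same check as a quick loop (sketch):
for i in 1 2 3; do curl -s http://172.17.0.13:8000; done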

}}}

== Certified kubernetes application developer ==
 * https://www.katacoda.com/courses/kubernetes/first-steps-to-ckad-certification

{{{#!highlight bash
# In master
launch.sh
# Waiting for Kubernetes to start...
# Kubernetes started
kubectl get nodes
# NAME     STATUS   ROLES    AGE   VERSION
# master   Ready    master   85m   v1.14.0
# node01   Ready    <none>   85m   v1.14.0

# deploy app
kubectl create deployment examplehttpapp --image=katacoda/docker-http-server
# deployment.apps/examplehttpapp created

# view all deployments
kubectl get deployments
# NAME             READY   UP-TO-DATE   AVAILABLE   AGE
# examplehttpapp   1/1     1            1           25s

# A deployment launches a set of Pods. A Pod is a group of one or more
# containers deployed across the cluster.
kubectl get pods
# NAME                            READY   STATUS    RESTARTS   AGE
# examplehttpapp-58f66848-n7wn7   1/1     Running   0          71s

# show the pod IP and the node it runs on
kubectl get pods -o wide
# NAME                            READY   STATUS    RESTARTS   AGE    IP          NODE     NOMINATED NODE   READINESS GATES
# examplehttpapp-58f66848-n7wn7   1/1     Running   0          113s   10.44.0.2   node01   <none>           <none>

# describe pod
kubectl describe pod examplehttpapp-58f66848-n7wn7
# Name:               examplehttpapp-58f66848-n7wn7
# Namespace:          default
# Priority:           0
# PriorityClassName:  <none>
# Node:               node01/172.17.0.24
# Start Time:         Sat, 27 Jul 2019 17:59:35 +0000
# Labels:             app=examplehttpapp
#                     pod-template-hash=58f66848
# Annotations:        <none>
# Status:             Running
# IP:                 10.44.0.2

# list all namespaces within the cluster
kubectl get ns
# Namespaces can be used to filter queries to the available objects.
kubectl get pods -n kube-system
kubectl create ns testns
# namespace/testns created
kubectl create deployment namespacedeg -n testns --image=katacoda/docker-http-server
# deployment.apps/namespacedeg created
kubectl get pods -n testns
# NAME                            READY   STATUS    RESTARTS   AGE
# namespacedeg-74dcc7dc64-wcxnj   1/1     Running   0          3s
kubectl get pods -n testns -o wide
# NAME                            READY   STATUS    RESTARTS   AGE   IP          NODE     NOMINATED NODE   READINESS GATES
# namespacedeg-74dcc7dc64-wcxnj   1/1     Running   0          18s   10.44.0.3   node01   <none>           <none>

# kubectl can scale the number of Pods running for a deployment, referred to as replicas.
kubectl scale deployment examplehttpapp --replicas=5
# deployment.extensions/examplehttpapp scaled
kubectl get deployments -o wide
# NAME             READY   UP-TO-DATE   AVAILABLE   AGE    CONTAINERS           IMAGES                        SELECTOR
# examplehttpapp   5/5     5            5           9m6s   docker-http-server   katacoda/docker-http-server   app=examplehttpapp
kubectl get pods -o wide
# NAME                            READY   STATUS    RESTARTS   AGE     IP          NODE     NOMINATED NODE   READINESS GATES
# examplehttpapp-58f66848-cf6pl   1/1     Running   0          65s     10.44.0.6   node01   <none>           <none>
# examplehttpapp-58f66848-lfrq4   1/1     Running   0          65s     10.44.0.5   node01   <none>           <none>
# examplehttpapp-58f66848-n7wn7   1/1     Running   0          9m26s   10.44.0.2   node01   <none>           <none>
# examplehttpapp-58f66848-snwl7   1/1     Running   0          65s     10.44.0.7   node01   <none>           <none>
# examplehttpapp-58f66848-vd8db   1/1     Running   0          65s     10.44.0.4   node01   <none>           <none>

# Everything within Kubernetes is controllable as YAML.
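# To inspect an object's YAML (standard kubectl):
kubectl get deployment examplehttpapp -o yaml
# A non-interactive alternative to editing replicas (sketch, standard kubectl):
kubectl patch deployment examplehttpapp -p '{"spec":{"replicas":10}}'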
# kubectl edit opens vi/vim; changing spec.replicas to 10 and saving the file
# increases the number of pods.
kubectl edit deployment examplehttpapp

kubectl get nodes -o wide
# NAME     STATUS   ROLES    AGE    VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
# master   Ready    master   102m   v1.14.0   172.17.0.19   <none>        Ubuntu 16.04.6 LTS   4.4.0-150-generic   docker://18.9.5
# node01   Ready    <none>   102m   v1.14.0   172.17.0.24   <none>        Ubuntu 16.04.6 LTS   4.4.0-150-generic   docker://18.9.5

# The image can be changed using the set image command and then rolled out.
# apply a new image to the pods/containers
kubectl --record=true set image deployment examplehttpapp docker-http-server=katacoda/docker-http-server:v2
# deployment.extensions/examplehttpapp image updated
kubectl rollout status deployment examplehttpapp
# Waiting for deployment "examplehttpapp" rollout to finish: 1 out of 3 new replicas have been updated...
# Waiting for deployment "examplehttpapp" rollout to finish: 1 out of 3 new replicas have been updated...
# Waiting for deployment "examplehttpapp" rollout to finish: 1 out of 3 new replicas have been updated...
# Waiting for deployment "examplehttpapp" rollout to finish: 2 out of 3 new replicas have been updated...
# Waiting for deployment "examplehttpapp" rollout to finish: 2 out of 3 new replicas have been updated...
# Waiting for deployment "examplehttpapp" rollout to finish: 2 out of 3 new replicas have been updated...
# Waiting for deployment "examplehttpapp" rollout to finish: 1 old replicas are pending termination...
# Waiting for deployment "examplehttpapp" rollout to finish: 1 old replicas are pending termination...
# deployment "examplehttpapp" successfully rolled out

# rollback deployment
kubectl rollout undo deployment examplehttpapp
# deployment.extensions/examplehttpapp rolled back

# The expose command creates a new service for a deployment. --port specifies
# the port of the application we want to make available.
kubectl expose deployment examplehttpapp --port 80
# service/examplehttpapp exposed
kubectl get svc -o wide
# NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE    SELECTOR
# examplehttpapp   ClusterIP   10.103.93.196   <none>        80/TCP    13s    app=examplehttpapp
# kubernetes       ClusterIP   10.96.0.1       <none>        443/TCP   105m
kubectl describe svc examplehttpapp
# Name:              examplehttpapp
# Namespace:         default
# Labels:            app=examplehttpapp
# Annotations:       <none>
# Selector:          app=examplehttpapp
# Type:              ClusterIP
# IP:                10.103.93.196
# Port:              <unset>  80/TCP
# TargetPort:        80/TCP
# Endpoints:         10.44.0.2:80,10.44.0.4:80,10.44.0.5:80
# Session Affinity:  None
# Events:            <none>

# But how does Kubernetes know where to send traffic? That is managed by labels.
# Each object within Kubernetes can have a label attached, allowing Kubernetes
# to discover and use the configuration.
kubectl get services -l app=examplehttpapp -o go-template='{{(index .items 0).spec.clusterIP}}'
kubectl get services
# NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
# examplehttpapp   ClusterIP   10.103.93.196   <none>        80/TCP    2m32s
# kubernetes       ClusterIP   10.96.0.1       <none>        443/TCP   107m
kubectl get services -o wide
# NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE     SELECTOR
# examplehttpapp   ClusterIP   10.103.93.196   <none>        80/TCP    2m36s   app=examplehttpapp
# kubernetes       ClusterIP   10.96.0.1       <none>        443/TCP   107m

# kubectl logs views the logs for Pods
kubectl logs $(kubectl get pods -l app=examplehttpapp -o go-template='{{(index .items 0).metadata.name}}')
# Web Server started. Listening on 0.0.0.0:80
# view the CPU or memory usage of a node or Pod
kubectl top node
# NAME     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
# master   133m         3%     1012Mi          53%
# node01   49m          1%     673Mi           17%
kubectl top pod
# NAME                            CPU(cores)   MEMORY(bytes)
# examplehttpapp-58f66848-ctnml   0m           0Mi
# examplehttpapp-58f66848-gljk9   1m           0Mi
# examplehttpapp-58f66848-hfqts   1m           0Mi
}}}

== k3s - Lightweight Kubernetes ==
 * https://k3s.io/
 * https://github.com/rancher/k3s/releases/tag/v1.17.0+k3s.1
 * https://rancher.com/docs/k3s/latest/en/
 * https://rancher.com/docs/k3s/latest/en/quick-start/

K3s works great on anything from a Raspberry Pi to an AWS a1.4xlarge 32GiB server, and is pitched at situations where a "PhD in k8s clusterology" is infeasible. Download the latest release; x86_64, ARMv7, and ARM64 are supported.

{{{#!highlight bash
curl -sfL https://get.k3s.io | sh -
# or with sudo:
sudo curl -sfL https://get.k3s.io | sh -
# [INFO] Finding latest release
# [INFO] Using v1.17.0+k3s.1 as release
# [INFO] Downloading hash https://github.com/rancher/k3s/releases/download/v1.17.0+k3s.1/sha256sum-amd64.txt
# [INFO] Downloading binary https://github.com/rancher/k3s/releases/download/v1.17.0+k3s.1/k3s
# [INFO] Verifying binary download
# [INFO] Installing k3s to /usr/local/bin/k3s
# [INFO] Creating /usr/local/bin/kubectl symlink to k3s
# [INFO] Creating /usr/local/bin/crictl symlink to k3s
# [INFO] Creating /usr/local/bin/ctr symlink to k3s
# [INFO] Creating killall script /usr/local/bin/k3s-killall.sh
# [INFO] Creating uninstall script /usr/local/bin/k3s-uninstall.sh
# [INFO] env: Creating environment file /etc/systemd/system/k3s.service.env
# [INFO] systemd: Creating service file /etc/systemd/system/k3s.service
# [INFO] systemd: Enabling k3s unit
# Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
# [INFO] systemd: Starting k3s

# as root
k3s kubectl cluster-info
kubectl create deployment springboot-test --image=vbodocker/springboot-test:latest
kubectl expose deployment springboot-test --port=8000 --target-port=8080 --type=NodePort
kubectl get services
IP_SPRINGBOOT=$(kubectl get services | grep springboot | awk '//{print $3}')
curl http://$IP_SPRINGBOOT:8000/dummy

# list containerd images and containers
k3s crictl images
k3s crictl ps
# connect to a container by id
crictl exec -it 997a2ad8c763a sh
# connect to a container/pod
kubectl get pods
kubectl exec -it springboot-test-6bb5fdfc48-phh8k sh
cat /etc/os-release # alpine linux in container

# give sudo rights to user
/sbin/usermod -aG sudo user

# scale pods
sudo kubectl scale deployment springboot-test --replicas=3
sudo kubectl get pods -o wide

# add mariadb pod/service
sudo kubectl create deployment mariadb-test --image=mariadb:latest
sudo kubectl get pods -o wide
sudo kubectl delete deployment mariadb-test

# https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/
sudo kubectl apply -f mariadb-pv.yaml
# persistentvolume/mariadb-pv-volume created
# persistentvolumeclaim/mariadb-pv-claim created
sudo kubectl apply -f mariadb-deployment.yaml
# service/mariadb created
# deployment.apps/mariadb created
sudo kubectl describe deployment mariadb
sudo kubectl get svc -o wide

# connect to mariadb pod
sudo kubectl exec -it mariadb-8578f4dc8c-r4ftv /bin/bash
ss -atn # show listening tcp ports
ip address # show ip addresses
mysql -h localhost -p
mysql -u root -h 10.42.0.12 -p

# delete service, persistent volume claim and persistent volume
sudo kubectl delete deployment,svc mariadb
sudo kubectl delete pvc mariadb-pv-claim
sudo kubectl delete pv mariadb-pv-volume
}}}

=== mariadb-pv.yaml ===
{{{#!highlight yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mariadb-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mariadb-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
}}}

=== mariadb-deployment.yaml ===
{{{#!highlight yaml
apiVersion: v1
kind: Service
metadata:
  name: mariadb
spec:
  ports:
    - port: 3306
  selector:
    app: mariadb
  clusterIP: None
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: mariadb
spec:
  selector:
    matchLabels:
      app: mariadb
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mariadb
    spec:
      containers:
        - image: mariadb:latest
          name: mariadb
          env:
            # Use secret in real usage
            - name: MYSQL_ROOT_PASSWORD
              value: password
          ports:
            - containerPort: 3306
              name: mariadb
          volumeMounts:
            - name: mariadb-persistent-storage
              mountPath: /var/lib/mariadb
      volumes:
        - name: mariadb-persistent-storage
          persistentVolumeClaim:
            claimName: mariadb-pv-claim
}}}

== Init containers ==
 * https://kubernetes.io/docs/concepts/workloads/pods/init-containers/

Init containers run to completion before the app containers start; they can contain utilities or setup scripts not present in the app image, as in the sketch below.
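A minimal init-container Pod sketch (names and images are illustrative, not from the original; the init container just sleeps, standing in for real setup work):

{{{#!highlight yaml
apiVersion: v1
kind: Pod
metadata:
  name: init-demo            # illustrative name
spec:
  initContainers:
    - name: wait-for-setup
      image: busybox:latest
      # placeholder for real setup work, e.g. waiting for a service to appear
      command: ["sh", "-c", "sleep 5"]
  containers:
    - name: app
      image: alpine:latest
      command: ["/bin/sleep", "3650d"]
}}}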
== systemctl commands ==
{{{#!highlight bash
systemctl start k3s
systemctl stop k3s
systemctl status k3s
systemctl disable k3s.service
systemctl enable k3s
}}}

== Ubuntu pod ==
=== ubuntu.yaml ===
{{{#!highlight yaml
apiVersion: v1
kind: Pod
metadata:
  name: ubuntu
  labels:
    app: ubuntu
spec:
  containers:
    - name: ubuntu
      image: ubuntu:latest
      command: ["/bin/sleep", "3650d"]
      imagePullPolicy: IfNotPresent
  restartPolicy: Always
}}}

{{{#!highlight bash
sudo kubectl apply -f ubuntu.yaml
sudo kubectl get pods
sudo kubectl exec -it ubuntu -- bash
sudo kubectl delete pod ubuntu
}}}

== Alpine pod ==
=== alpine.yaml ===
{{{#!highlight yaml
apiVersion: v1
kind: Pod
metadata:
  name: alpine
  labels:
    app: alpine
spec:
  containers:
    - name: alpine
      image: alpine:latest
      command: ["/bin/sleep", "3650d"]
      imagePullPolicy: IfNotPresent
  restartPolicy: Always
}}}

{{{#!highlight bash
sudo kubectl apply -f alpine.yaml
sudo kubectl exec -it alpine -- sh
}}}

== Nginx with persistent volume ==
Pods use PersistentVolumeClaims to request physical storage.

{{{#!highlight bash
cd /tmp
mkdir -p /tmp/data
echo 'Hello from Kubernetes storage' > /tmp/data/index.html
}}}

=== pv-volume.yaml ===
{{{#!highlight yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 0.2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/tmp/data"
}}}

=== pv-claim.yaml ===
{{{#!highlight yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 0.2Gi
}}}

=== pv-pod.yaml ===
{{{#!highlight yaml
apiVersion: v1
kind: Pod
metadata:
  name: task-pv-pod
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: task-pv-claim
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: task-pv-storage
}}}

{{{#!highlight bash
sudo kubectl apply -f pv-volume.yaml
sudo kubectl apply -f pv-claim.yaml
sudo kubectl apply -f pv-pod.yaml
sudo kubectl get pods -o wide
curl http://10.42.0.28/
sudo kubectl exec -it task-pv-pod -- bash
cd /usr/share/nginx/html
echo "Hey from Kubernetes storage" > index.html
cat /etc/os-release # debian buster
kubectl delete pod task-pv-pod
kubectl delete pvc task-pv-claim
kubectl delete pv task-pv-volume
cat /tmp/data/index.html
}}}

== Generate yaml ==
{{{#!highlight sh
sudo kubectl create deployment cherrypy-test --image=vbodocker/cherrypy-test --dry-run=client --output=yaml
sudo kubectl expose deployment cherrypy-test --port=8080 --type=NodePort --dry-run=client --output=yaml
sudo kubectl scale deployment cherrypy-test --replicas=3 --dry-run=client --output=yaml
}}}

== Alpine persistent volume ==
=== alpine-shared.yaml ===
{{{#!highlight yaml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: alpine-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 0.2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/tmp/alpine-data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: alpine-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 0.2Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: alpine-pod
  labels:
    app: alpine-pod
spec:
  volumes:
    - name: alpine-pv-storage
      persistentVolumeClaim:
        claimName: alpine-pv-claim
  containers:
    - name: alpine
      image: alpine:latest
      command: ["/bin/sleep", "3650d"]
      imagePullPolicy: IfNotPresent
      volumeMounts:
        - mountPath: "/mnt/alpine/data"
"/mnt/alpine/data" name: alpine-pv-storage restartPolicy: Always }}} {{{#!highlight bash sudo kubectl apply -f alpine-shared.yaml sudo kubectl exec -it alpine-pod -- sh /mnt/alpine/data # echo "teste" > x.txt # inside pod cat /tmp/alpine-data/x.txt # k8s host }}} == MariaDB + NFS == {{{#!highlight sh /vol *(rw,sync,insecure,fsid=0,no_subtree_check,no_root_squash) exportfs -rav exporting *:/vol mkdir -p /vol/mariadb-0 kubectl apply -f mariadb-nfs.yaml kubectl exec -it mariadb-79847f5d97-smbdx -- bash touch /var/lib/mariadb/b mount | grep nfs kubectl delete -f mariadb-nfs.yaml kubectl get pods kubectl get pvc kubectl get pv }}} === mariadb-nfs.yaml === {{{#!highlight yaml --- apiVersion: v1 kind: PersistentVolume metadata: name: mdb-vol-0 labels: volume: mdb-volume spec: storageClassName: manual capacity: storage: 1Gi accessModes: - ReadWriteOnce nfs: server: 127.0.0.1 path: "/vol/mariadb-0" --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: mdb-pv-claim spec: storageClassName: manual accessModes: - ReadWriteOnce resources: requests: storage: 1Gi --- apiVersion: v1 kind: Service metadata: name: mariadb spec: ports: - port: 3306 selector: app: mariadb clusterIP: None --- apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2 kind: Deployment metadata: name: mariadb spec: selector: matchLabels: app: mariadb strategy: type: Recreate template: metadata: labels: app: mariadb spec: containers: - image: mariadb:latest name: mariadb env: # Use secret in real usage - name: MYSQL_ROOT_PASSWORD value: password ports: - containerPort: 3306 name: mariadb volumeMounts: - name: mdb-persistent-storage mountPath: /var/lib/mariadb volumes: - name: mdb-persistent-storage persistentVolumeClaim: claimName: mdb-pv-claim }}} == Persistent volumes == * https://kubernetes.io/docs/concepts/storage/persistent-volumes/ A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using Storage Classes. A PersistentVolumeClaim (PVC) is a request for storage by a user. Pods consume node resources and PVCs consume PV resources. Claims can request specific size and access modes (e.g., they can be mounted ReadWriteOnce, ReadOnlyMany or ReadWriteMany, see AccessModes). Types of Persistent Volumes: * local - local storage devices mounted on nodes. * nfs - Network File System (NFS) storage == Ingress controller nginx example == === ingress-cherrypy-test.yml === {{{#!highlight yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: ingress-cherrypy-test spec: rules: - host: cp.info http: paths: - path: / pathType: Prefix backend: service: name: cherrypy-test port: number: 8000 ingressClassName: nginx }}} === Steps === {{{#!highlight bash # install k3s curl -sfL https://get.k3s.io | sh - KUBECONFIG=~/.kube/config mkdir ~/.kube 2> /dev/null sudo k3s kubectl config view --raw > "$KUBECONFIG" chmod 600 "$KUBECONFIG" nano ~/.bashrc export KUBECONFIG=~/.kube/config source . 
sudo nano /etc/systemd/system/k3s.service
# ExecStart=/usr/local/bin/k3s server --write-kubeconfig-mode=644
sudo systemctl daemon-reload
sudo service k3s start
sudo service k3s status
kubectl get pods
k3s kubectl cluster-info

# remove traefik (bundled by default) to use nginx instead
kubectl -n kube-system delete helmcharts.helm.cattle.io traefik
sudo service k3s stop
sudo nano /etc/systemd/system/k3s.service
# ExecStart=/usr/local/bin/k3s server --write-kubeconfig-mode=644 --no-deploy traefik
sudo systemctl daemon-reload
sudo rm /var/lib/rancher/k3s/server/manifests/traefik.yaml
sudo service k3s start
kubectl -n kube-system delete helmcharts.helm.cattle.io traefik
sudo systemctl restart k3s
kubectl get nodes
kubectl delete node localhost
kubectl get pods --all-namespaces
kubectl get services --all-namespaces
kubectl get deployment --all-namespaces

# install nginx ingress controller
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.4.0/deploy/static/provider/cloud/deploy.yaml
kubectl get pods --namespace=ingress-nginx
kubectl create deployment cherrypy-test --image=vbodocker/cherrypy-test
kubectl expose deployment cherrypy-test --port=8000 --target-port=8080 --type=ClusterIP # cluster ip, port 8000
kubectl get services
kubectl apply -f ingress-cherrypy-test.yml
EXTERNAL_IP=$(ip addr show | grep wlp | grep inet | awk '//{print $2}' | sed 's/\// /g' | awk '//{print $1}')
echo $EXTERNAL_IP
sudo sh -c " echo '$EXTERNAL_IP cp.info' >> /etc/hosts "
kubectl get ingress
curl cp.info
kubectl scale deployment cherrypy-test --replicas=5
curl http://cp.info/ -vvv
sudo apt install apache2-utils
ab -n 10 -c 10 http://cp.info/

# Push image to docker hub
docker build -t vbodocker/cherrypy-test .
docker run -p 8080:8080 vbodocker/cherrypy-test
docker login # login to docker hub
docker push vbodocker/cherrypy-test
docker pull vbodocker/cherrypy-test:latest

# Rollout, deploy new image
kubectl get deployments -o wide # shows image urls
kubectl rollout restart deployment cherrypy-test # redeploy the image for cherrypy-test
kubectl rollout status deployment cherrypy-test
kubectl get deployments -o wide
kubectl get pods -o wide # age should be low for the newly deployed pods
}}}

== Install k3s static binary in Slack64 ==
 * https://github.com/k3s-io/k3s#k3s---lightweight-kubernetes
 * Binaries available in https://github.com/k3s-io/k3s#manual-download
 * wget https://github.com/k3s-io/k3s/releases/download/v1.25.3%2Bk3s1/k3s

{{{#!highlight sh
sudo mv ~/Downloads/k3s /usr/bin/
sudo chmod 744 /usr/bin/k3s
}}}

=== /etc/rc.d/rc.k3s ===
{{{#!highlight sh
#!/bin/sh
PATH=$PATH:/usr/sbin

k3s_start() {
  /usr/bin/k3s server --write-kubeconfig-mode=644 \
    --disable traefik > /var/log/k3s.log 2>&1 &
}

k3s_stop() {
  kill $(ps uax | grep "/usr/bin/k3s" | head -1 | awk '//{print $2}')
  ps uax | grep containerd | awk '//{print $2}' | xargs -i kill {}
}

k3s_restart() {
  k3s_stop
  k3s_start
}

case "$1" in
'start')
  k3s_start
  ;;
'stop')
  k3s_stop
  ;;
'restart')
  k3s_restart
  ;;
*)
  echo "usage $0 start|stop|restart"
esac
}}}

=== ingress-cherrypy-test.yml ===
{{{#!highlight yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-cherrypy-test
spec:
  rules:
    - host: cp.info
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: cherrypy-test
                port:
                  number: 8000
  ingressClassName: nginx
}}}

=== Steps ===
{{{#!highlight sh
echo "alias kubectl='/usr/bin/k3s kubectl'" >> ~/.bashrc
source ~/.bashrc
sudo sh /etc/rc.d/rc.k3s start
kubectl get nodes
kubectl get deployments --all-namespaces
kubectl get services --all-namespaces
kubectl get pods --all-namespaces
kubectl cluster-info

# install nginx ingress controller
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.4.0/deploy/static/provider/cloud/deploy.yaml
# wait for nginx ingress controller to finish
sleep 120
kubectl create deployment cherrypy-test --image=vbodocker/cherrypy-test
kubectl expose deployment cherrypy-test --port=8000 --target-port=8080 --type=ClusterIP
kubectl get pods --all-namespaces
kubectl get services --all-namespaces
kubectl apply -f ingress-cherrypy-test.yml
EXTERNAL_IP=$(/sbin/ip addr show | grep wl | grep inet | awk '//{print $2}' | sed 's/\// /g' | awk '//{print $1}')
echo $EXTERNAL_IP
sudo sh -c " echo '$EXTERNAL_IP cp.info' >> /etc/hosts "
cat /etc/hosts
kubectl get ingress
curl cp.info
}}}
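
To tear the example down afterwards, delete the objects created above; a minimal sketch using the same names:

{{{#!highlight sh
kubectl delete -f ingress-cherrypy-test.yml
kubectl delete service cherrypy-test
kubectl delete deployment cherrypy-test
}}}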