= kubernetes =
 * https://www.katacoda.com/courses/kubernetes
 * minikube version # check version 1.2.0
 * minikube start

{{{#!highlight bash
$ minikube version
minikube version: v1.2.0
$ minikube start
* minikube v1.2.0 on linux (amd64)
* Creating none VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
* Configuring environment for Kubernetes v1.15.0 on Docker 18.09.5
  - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
* Pulling images ...
* Launching Kubernetes ...
* Configuring local host environment ...
* Verifying: apiserver proxy etcd scheduler controller dns
* Done! kubectl is now configured to use "minikube"
}}}

== cluster details and health status ==
{{{#!highlight bash
$ kubectl cluster-info
Kubernetes master is running at https://172.17.0.30:8443
KubeDNS is running at https://172.17.0.30:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
}}}

== get cluster nodes ==
{{{#!highlight bash
$ kubectl get nodes
NAME       STATUS   ROLES    AGE    VERSION
minikube   Ready    master   3m1s   v1.15.0
}}}

== deploy containers ==
{{{#!highlight bash
# deploy container in cluster
$ kubectl create deployment first-deployment --image=katacoda/docker-http-server
deployment.apps/first-deployment created

# check pods
$ kubectl get pods
NAME                               READY   STATUS    RESTARTS   AGE
first-deployment-8cbf74484-s2fkl   1/1     Running   0          25s

# expose deployment as a NodePort service
$ kubectl expose deployment first-deployment --port=80 --type=NodePort
service/first-deployment exposed
$ kubectl get svc first-deployment
NAME               TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
first-deployment   NodePort   10.98.246.87   <none>        80:31219/TCP   105s

# send a request to port 80 on the cluster IP
$ curl 10.98.246.87:80

This request was processed by host: first-deployment-8cbf74484-s2fkl

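# the service is also reachable from outside the cluster via its NodePort (31219 in the svc output above) on the node: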
$ curl host01:31219

This request was processed by host: first-deployment-8cbf74484-s2fkl

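# not part of the original transcript: a sketch showing how the assigned NodePort
# could be read programmatically instead of copying it from the table above
$ kubectl get svc first-deployment -o jsonpath='{.spec.ports[0].nodePort}'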
}}}

== dashboard ==
{{{#!highlight bash
# The Kubernetes dashboard allows you to view your applications in a UI.
$ minikube addons enable dashboard
* dashboard was successfully enabled

$ kubectl apply -f /opt/kubernetes-dashboard.yaml # only in katacoda
service/kubernetes-dashboard-katacoda created

# check progress
$ kubectl get pods -n kube-system -w
NAME                                    READY   STATUS    RESTARTS   AGE
coredns-5c98db65d4-b2kxm                1/1     Running   0          17m
coredns-5c98db65d4-mm567                1/1     Running   1          17m
etcd-minikube                           1/1     Running   0          16m
kube-addon-manager-minikube             1/1     Running   0          16m
kube-apiserver-minikube                 1/1     Running   0          16m
kube-controller-manager-minikube        1/1     Running   0          16m
kube-proxy-pngm9                        1/1     Running   0          17m
kube-scheduler-minikube                 1/1     Running   0          16m
kubernetes-dashboard-7b8ddcb5d6-xt5nt   1/1     Running   0          76s
storage-provisioner                     1/1     Running   0          17m
^C
$ # dashboard url: https://2886795294-30000-kitek05.environments.katacoda.com/
# end of scenario: how to launch a single-node Kubernetes cluster
}}}

== Init master ==
{{{#!highlight bash
master $ kubeadm init --kubernetes-version $(kubeadm version -o short)
[init] Using Kubernetes version: v1.14.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.17.0.69]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master localhost] and IPs [172.17.0.69 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [172.17.0.69 127.0.0.1 ::1]
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests".
This can take up to 4m0s
[apiclient] All control plane components are healthy after 16.503433 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --experimental-upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: xfvno5.q2xfb2m3nw7grdjm
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.17.0.69:6443 --token xfvno5.q2xfb2m3nw7grdjm \
    --discovery-token-ca-cert-hash sha256:26d11c038d236967630d401747f210af9e3679fb1638e8b599a2da4cb98ab159
}}}

{{{#!highlight bash
# configure kubectl for the root user on the master
master $ mkdir -p $HOME/.kube
master $ pwd
/root
master $ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
master $ sudo chown $(id -u):$(id -g) $HOME/.kube/config
master $ export KUBECONFIG=$HOME/.kube/config
master $ echo $KUBECONFIG
/root/.kube/config
}}}

== deploy cni weaveworks - deploy a pod network to the cluster ==
The Container Network Interface (CNI) defines how the different nodes and their workloads communicate. Weave Net provides a network that connects all pods together, implementing the Kubernetes networking model; Kubernetes uses CNI to join pods onto the Weave Net network.
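Weave Net runs as a DaemonSet, so one weave-net pod is scheduled on every node that joins the cluster. Once the manifest in the next block has been applied, the network can be sanity-checked roughly as follows (a sketch, not part of the original transcript; the busybox test pod and image tag are illustrative):

{{{#!highlight bash
# one weave-net pod per node; -o wide shows which node each pod runs on
kubectl get daemonset weave-net -n kube-system
kubectl get pods -n kube-system -o wide | grep weave-net

# launch a throwaway pod and resolve the kubernetes service to confirm that
# pod networking and cluster DNS work over the new pod network
kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default
}}}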
{{{#!highlight bash
master $ kubectl apply -f /opt/weave-kube
serviceaccount/weave-net created
clusterrole.rbac.authorization.k8s.io/weave-net created
clusterrolebinding.rbac.authorization.k8s.io/weave-net created
role.rbac.authorization.k8s.io/weave-net created
rolebinding.rbac.authorization.k8s.io/weave-net created
daemonset.extensions/weave-net created
master $ kubectl get pod -n kube-system
NAME                             READY   STATUS    RESTARTS   AGE
coredns-fb8b8dccf-b9rd7          1/1     Running   0          11m
coredns-fb8b8dccf-sfgbn          1/1     Running   0          11m
etcd-master                      1/1     Running   0          10m
kube-apiserver-master            1/1     Running   0          10m
kube-controller-manager-master   1/1     Running   0          10m
kube-proxy-l42wp                 1/1     Running   0          11m
kube-scheduler-master            1/1     Running   1          10m
weave-net-mcxml                  2/2     Running   0          84s
}}}

== join cluster ==
{{{#!highlight bash
master $ kubeadm token list # check tokens
TOKEN                     TTL   EXPIRES                USAGES                   DESCRIPTION                                                 EXTRA GROUPS
xfvno5.q2xfb2m3nw7grdjm   23h   2019-07-28T16:19:18Z   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token
}}}

{{{#!highlight bash
# in node01
# join cluster
kubeadm join --discovery-token-unsafe-skip-ca-verification --token=xfvno5.q2xfb2m3nw7grdjm 172.17.0.69:6443
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

# The --discovery-token-unsafe-skip-ca-verification flag is used to bypass the discovery token verification.

# in master
master $ kubectl get nodes
NAME     STATUS   ROLES    AGE    VERSION
master   Ready    master   17m    v1.14.0
node01   Ready    <none>   107s   v1.14.0

# in node01 there is no kubeconfig, so kubectl cannot reach the API server
node01 $ kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?
node01 $
}}}

== deploy container in cluster ==
{{{#!highlight bash
master $ kubectl create deployment http --image=katacoda/docker-http-server:latest
deployment.apps/http created
master $ kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
http-7f8cbdf584-74pd9   1/1     Running   0          11s

# no output on the master: the pod was scheduled on node01
master $ docker ps | grep http-server
master $

node01 $ docker ps | grep http-server
adb3cde7f861   katacoda/docker-http-server   "/app"   About a minute ago   Up About a minute   k8s_docker-http-server_http-7f8cbdf584-74pd9_default_04a17065-b08d-11e9-bff1-0242ac110045_0

# expose deployment
master $ kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
http-7f8cbdf584-74pd9   1/1     Running   0          17m
master $ kubectl expose deployment http --port=80 --type=NodePort
service/http exposed
master $ kubectl get service http
NAME   TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
http   NodePort   10.101.65.149   <none>        80:30982/TCP   49s
master $ curl 10.101.65.149:80

This request was processed by host: http-7f8cbdf584-74pd9

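# not part of the original transcript: scaling the deployment adds replicas
# (scheduled on node01, since the master is tainted NoSchedule); -o wide shows
# which node each pod landed on, and the NodePort service balances requests across them
master $ kubectl scale deployment http --replicas=3
master $ kubectl get pods -o wide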
}}}

== apply dashboard in cluster ==
{{{
master $ kubectl apply -f dashboard.yaml
secret/kubernetes-dashboard-certs created
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard created
master $ kubectl get pods -n kube-system
NAME                                    READY   STATUS              RESTARTS   AGE
coredns-fb8b8dccf-b9rd7                 1/1     Running             0          42m
coredns-fb8b8dccf-sfgbn                 1/1     Running             0          42m
etcd-master                             1/1     Running             0          41m
kube-apiserver-master                   1/1     Running             0          40m
kube-controller-manager-master          1/1     Running             0          40m
kube-proxy-gwrps                        1/1     Running             0          26m
kube-proxy-l42wp                        1/1     Running             0          42m
kube-scheduler-master                   1/1     Running             1          40m
kubernetes-dashboard-5f57845f9d-ls7q2   0/1     ContainerCreating   0          2s
weave-net-gww8b                         2/2     Running             0          26m
weave-net-mcxml                         2/2     Running             0          31m
}}}

Create service account for dashboard:
{{{
cat <
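A minimal sketch of the usual pattern for granting dashboard access, assuming an admin-user ServiceAccount bound to the cluster-admin ClusterRole (the account name is illustrative, not taken from the original scenario):

{{{#!highlight bash
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user        # illustrative name
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
EOF

# print the bearer token used to log in to the dashboard (on v1.14-era clusters
# the token is stored in a Secret created for the ServiceAccount)
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
}}}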