kubernetes
- minikube version # check the version (v1.2.0 here)
- minikube start   # start the local single-node cluster
{{{#!highlight bash
$ minikube version
minikube version: v1.2.0
$ minikube start
* minikube v1.2.0 on linux (amd64)
* Creating none VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
* Configuring environment for Kubernetes v1.15.0 on Docker 18.09.5
  - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
* Pulling images ...
* Launching Kubernetes ...
* Configuring local host environment ...
* Verifying: apiserver proxy etcd scheduler controller dns
* Done! kubectl is now configured to use "minikube"
}}}
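Once `minikube start` finishes, it is worth confirming that the machine and its components are actually up before moving on. A minimal sketch, assuming the same single-node minikube environment as above:

{{{#!highlight bash
# confirm the minikube machine and cluster components are running
$ minikube status

# halt the VM (keeps state) or remove it entirely when done
$ minikube stop
$ minikube delete
}}}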
== cluster details and health status ==
{{{#!highlight bash
$ kubectl cluster-info
Kubernetes master is running at https://172.17.0.30:8443
KubeDNS is running at https://172.17.0.30:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
}}}
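As the output suggests, `kubectl cluster-info dump` prints the full cluster state; it can also write it to a directory for later inspection. A sketch (the output directory here is an arbitrary choice):

{{{#!highlight bash
# dump the state of all namespaces into a directory for offline debugging
$ kubectl cluster-info dump --all-namespaces --output-directory=/tmp/cluster-state
$ ls /tmp/cluster-state
}}}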
== get cluster nodes ==
{{{#!highlight bash
$ kubectl get nodes
NAME       STATUS   ROLES    AGE    VERSION
minikube   Ready    master   3m1s   v1.15.0
}}}
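For more than the one-line summary, the wide output and a full node description show addresses, capacity, and conditions. A minimal sketch:

{{{#!highlight bash
# show node IPs, OS image and container runtime alongside the summary
$ kubectl get nodes -o wide

# full detail: capacity, allocated resources, conditions, events
$ kubectl describe node minikube
}}}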
== deploy containers ==
{{{#!highlight bash
# deploy a container in the cluster
$ kubectl create deployment first-deployment --image=katacoda/docker-http-server
deployment.apps/first-deployment created

# check pods
$ kubectl get pods
NAME                               READY   STATUS    RESTARTS   AGE
first-deployment-8cbf74484-s2fkl   1/1     Running   0          25s

# expose the deployment
$ kubectl expose deployment first-deployment --port=80 --type=NodePort
service/first-deployment exposed

$ kubectl get svc first-deployment
NAME               TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
first-deployment   NodePort   10.98.246.87   <none>        80:31219/TCP   105s

# request port 80 on the cluster IP
$ curl 10.98.246.87:80
<h1>This request was processed by host: first-deployment-8cbf74484-s2fkl</h1>

# request the NodePort on the host
$ curl host01:31219
<h1>This request was processed by host: first-deployment-8cbf74484-s2fkl</h1>
}}}
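With the service in place, the deployment can be scaled and the NodePort will spread requests across the replicas. A sketch, assuming the same first-deployment as above (`kubectl create deployment` labels its pods `app=<name>`):

{{{#!highlight bash
# run three replicas behind the same service
$ kubectl scale deployment first-deployment --replicas=3
$ kubectl get pods -l app=first-deployment

# repeated requests should now report different pod hostnames
$ curl host01:31219
$ curl host01:31219
}}}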
== dashboard ==
{{{#!highlight bash
# the Kubernetes dashboard allows you to view your applications in a UI
$ minikube addons enable dashboard
* dashboard was successfully enabled

# expose the dashboard (only needed in katacoda)
$ kubectl apply -f /opt/kubernetes-dashboard.yaml
service/kubernetes-dashboard-katacoda created

# check progress
$ kubectl get pods -n kube-system -w
NAME                                    READY   STATUS    RESTARTS   AGE
coredns-5c98db65d4-b2kxm                1/1     Running   0          17m
coredns-5c98db65d4-mm567                1/1     Running   1          17m
etcd-minikube                           1/1     Running   0          16m
kube-addon-manager-minikube             1/1     Running   0          16m
kube-apiserver-minikube                 1/1     Running   0          16m
kube-controller-manager-minikube        1/1     Running   0          16m
kube-proxy-pngm9                        1/1     Running   0          17m
kube-scheduler-minikube                 1/1     Running   0          16m
kubernetes-dashboard-7b8ddcb5d6-xt5nt   1/1     Running   0          76s
storage-provisioner                     1/1     Running   0          17m
^C

# dashboard url (katacoda environment)
# https://2886795294-30000-kitek05.environments.katacoda.com/
}}}
That covers launching a single-node Kubernetes cluster; the rest of the page builds a multi-node cluster with kubeadm.
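Outside Katacoda, minikube itself can expose the dashboard; a minimal sketch:

{{{#!highlight bash
# print a local URL for the dashboard instead of opening a browser
$ minikube dashboard --url
}}}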
== Init master ==
{{{#!highlight bash
master $ kubeadm init --kubernetes-version $(kubeadm version -o short)
[init] Using Kubernetes version: v1.14.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.17.0.69]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master localhost] and IPs [172.17.0.69 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [172.17.0.69 127.0.0.1 ::1]
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 16.503433 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --experimental-upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: xfvno5.q2xfb2m3nw7grdjm
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.17.0.69:6443 --token xfvno5.q2xfb2m3nw7grdjm \
    --discovery-token-ca-cert-hash sha256:26d11c038d236967630d401747f210af9e3679fb1638e8b599a2da4cb98ab159
}}}
{{{#!highlight bash
master $ mkdir -p $HOME/.kube
master $ pwd
/root
master $ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
master $ sudo chown $(id -u):$(id -g) $HOME/.kube/config
master $ export KUBECONFIG=$HOME/.kube/config
master $ echo $KUBECONFIG
/root/.kube/config
}}}
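With the kubeconfig in place, kubectl should now reach the new control plane. A quick sanity check (component statuses were still served by the API in v1.14):

{{{#!highlight bash
# confirm kubectl talks to the new control plane
master $ kubectl config current-context
master $ kubectl get componentstatuses
}}}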
== deploy cni weaveworks ==
The Container Network Interface (CNI) defines how the different nodes and their workloads communicate. Weave Net implements the Kubernetes networking model by providing a network that connects all pods together; Kubernetes uses CNI to join pods onto Weave Net.
{{{#!highlight bash
master $ kubectl apply -f /opt/weave-kube
serviceaccount/weave-net created
clusterrole.rbac.authorization.k8s.io/weave-net created
clusterrolebinding.rbac.authorization.k8s.io/weave-net created
role.rbac.authorization.k8s.io/weave-net created
rolebinding.rbac.authorization.k8s.io/weave-net created
daemonset.extensions/weave-net created

master $ kubectl get pod -n kube-system
NAME                             READY   STATUS    RESTARTS   AGE
coredns-fb8b8dccf-b9rd7          1/1     Running   0          11m
coredns-fb8b8dccf-sfgbn          1/1     Running   0          11m
etcd-master                      1/1     Running   0          10m
kube-apiserver-master            1/1     Running   0          10m
kube-controller-manager-master   1/1     Running   0          10m
kube-proxy-l42wp                 1/1     Running   0          11m
kube-scheduler-master            1/1     Running   1          10m
weave-net-mcxml                  2/2     Running   0          84s
}}}
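The /opt/weave-kube manifest is pre-staged by Katacoda. Outside that environment, the Weave Net install documented at the time looked like the sketch below; the URL embeds the cluster's Kubernetes version:

{{{#!highlight bash
# apply the upstream Weave Net manifest matching this cluster's version
master $ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
}}}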
== join cluster ==
{{{#!highlight bash
# check tokens on the master
master $ kubeadm token list
TOKEN                     TTL   EXPIRES                USAGES                   DESCRIPTION                                                 EXTRA GROUPS
xfvno5.q2xfb2m3nw7grdjm   23h   2019-07-28T16:19:18Z   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token
}}}
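Bootstrap tokens expire (24h TTL by default), so a fresh join command may be needed for nodes added later. A minimal sketch:

{{{#!highlight bash
# create a new token and print the full join command for workers
master $ kubeadm token create --print-join-command
}}}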
{{{#!highlight bash
# on node01: join the cluster
# the --discovery-token-unsafe-skip-ca-verification flag bypasses the discovery token CA verification
node01 $ kubeadm join --discovery-token-unsafe-skip-ca-verification --token=xfvno5.q2xfb2m3nw7grdjm 172.17.0.69:6443
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

# on the master
master $ kubectl get nodes
NAME     STATUS   ROLES    AGE    VERSION
master   Ready    master   17m    v1.14.0
node01   Ready    <none>   107s   v1.14.0
}}}
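node01 shows ROLES <none> because kubeadm only labels control-plane nodes. Optionally, the worker can be labeled by hand (the worker role label is a convention, not required), and a test deployment confirms pods get scheduled onto it. A sketch, assuming the two-node cluster above:

{{{#!highlight bash
# purely cosmetic: make 'kubectl get nodes' show a worker role
master $ kubectl label node node01 node-role.kubernetes.io/worker=

# the master is tainted NoSchedule, so these pods should land on node01
master $ kubectl create deployment http --image=katacoda/docker-http-server
master $ kubectl scale deployment http --replicas=2
master $ kubectl get pods -o wide
}}}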