
Revision 24 as of 2019-07-27 17:45:08
kubernetes

  • https://www.katacoda.com/courses/kubernetes

  • minikube version # check the installed version (v1.2.0 in this walkthrough)
  • minikube start

   1 $ minikube version
   2 minikube version: v1.2.0
   3 $ minikube start
   4 * minikube v1.2.0 on linux (amd64)
   5 * Creating none VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
   6 * Configuring environment for Kubernetes v1.15.0 on Docker 18.09.5
   7   - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
   8 * Pulling images ...
   9 * Launching Kubernetes ...
  10 
  11 * Configuring local host environment ...
  12 * Verifying: apiserver proxy etcd scheduler controller dns
  13 * Done! kubectl is now configured to use "minikube"
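Once startup finishes, a quick sanity check helps confirm that the VM is up and kubectl is wired to it. A sketch; both commands exist in minikube/kubectl 1.x:

```shell
# Sanity check after "minikube start": confirm the VM and kubectl wiring.
minikube status              # host, kubelet and apiserver should report Running
kubectl version --short     # prints client and server versions (v1.15.0 here)
```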

cluster details and health status

   1 $ kubectl cluster-info
   2 Kubernetes master is running at https://172.17.0.30:8443
   3 KubeDNS is running at https://172.17.0.30:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
   4 
   5 To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

get cluster nodes

   1 $ kubectl get nodes
   2 NAME       STATUS   ROLES    AGE    VERSION
   3 minikube   Ready    master   3m1s   v1.15.0

deploy containers

   1 # deploy container
   2 $ kubectl create deployment first-deployment --image=katacoda/docker-http-server
   3 deployment.apps/first-deployment created
   4 $ # deploy container in cluster
   5 # check pods
   6 $ kubectl get pods
   7 NAME                               READY   STATUS    RESTARTS   AGE
   8 first-deployment-8cbf74484-s2fkl   1/1     Running   0          25s
   9 # expose deployment
  10 $ kubectl expose deployment first-deployment --port=80 --type=NodePort
  11 service/first-deployment exposed
  12 
  13 $ kubectl get svc first-deployment
  14 NAME               TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
  15 first-deployment   NodePort   10.98.246.87   <none>        80:31219/TCP   105s
  16 # make a request to port 80 on the cluster IP
  17 $ curl 10.98.246.87:80
  18 <h1>This request was processed by host: first-deployment-8cbf74484-s2fkl</h1>
  19 
  20 $ curl host01:31219
  21 <h1>This request was processed by host: first-deployment-8cbf74484-s2fkl</h1>
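Since the NodePort (31219 above) is assigned dynamically, it can also be looked up with jsonpath instead of read off the table. A sketch, assuming the service name from above:

```shell
# Look up the dynamically assigned NodePort for the service created above.
NODE_PORT=$(kubectl get svc first-deployment -o jsonpath='{.spec.ports[0].nodePort}')
curl "host01:${NODE_PORT}"
```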

dashboard

   1 $ minikube addons enable dashboard 
   2 # The Kubernetes dashboard allows you to view your applications
   3 # in a UI.
   4 * dashboard was successfully enabled
   5 $ kubectl apply -f /opt/kubernetes-dashboard.yaml 
   6 # only in katacoda
   7 service/kubernetes-dashboard-katacoda created
   8 
   9 # watch progress until the dashboard pod is Running
  10 $ kubectl get pods -n kube-system -w
  11 NAME                                    READY   STATUS    RESTARTS   AGE
  12 coredns-5c98db65d4-b2kxm                1/1     Running   0          17m
  13 coredns-5c98db65d4-mm567                1/1     Running   1          17m
  14 etcd-minikube                           1/1     Running   0          16m
  15 kube-addon-manager-minikube             1/1     Running   0          16m
  16 kube-apiserver-minikube                 1/1     Running   0          16m
  17 kube-controller-manager-minikube        1/1     Running   0          16m
  18 kube-proxy-pngm9                        1/1     Running   0          17m
  19 kube-scheduler-minikube                 1/1     Running   0          16m
  20 kubernetes-dashboard-7b8ddcb5d6-xt5nt   1/1     Running   0          76s
  21 storage-provisioner                     1/1     Running   0          17m
  22 
  23 ^C$
  24 # dashboard url: https://2886795294-30000-kitek05.environments.katacoda.com/
  25 # (end of the "Launch a Single Node Kubernetes Cluster" scenario)
  26 

Init master

   1 master $ kubeadm init --kubernetes-version $(kubeadm version -o short)
   2 [init] Using Kubernetes version: v1.14.0
   3 [preflight] Running pre-flight checks
   4 [preflight] Pulling images required for setting up a Kubernetes cluster
   5 [preflight] This might take a minute or two, depending on the speed of your internet connection
   6 [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
   7 [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
   8 [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
   9 [kubelet-start] Activating the kubelet service
  10 [certs] Using certificateDir folder "/etc/kubernetes/pki"
  11 [certs] Generating "ca" certificate and key
  12 [certs] Generating "apiserver" certificate and key
  13 [certs] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.17.0.69]
  14 [certs] Generating "apiserver-kubelet-client" certificate and key
  15 [certs] Generating "front-proxy-ca" certificate and key
  16 [certs] Generating "front-proxy-client" certificate and key
  17 [certs] Generating "etcd/ca" certificate and key
  18 [certs] Generating "etcd/healthcheck-client" certificate and key
  19 [certs] Generating "apiserver-etcd-client" certificate and key
  20 [certs] Generating "etcd/server" certificate and key
  21 [certs] etcd/server serving cert is signed for DNS names [master localhost] and IPs [172.17.0.69 127.0.0.1 ::1]
  22 [certs] Generating "etcd/peer" certificate and key
  23 [certs] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [172.17.0.69 127.0.0.1 ::1]
  24 [certs] Generating "sa" key and public key
  25 [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
  26 [kubeconfig] Writing "admin.conf" kubeconfig file
  27 [kubeconfig] Writing "kubelet.conf" kubeconfig file
  28 [kubeconfig] Writing "controller-manager.conf" kubeconfig file
  29 [kubeconfig] Writing "scheduler.conf" kubeconfig file
  30 [control-plane] Using manifest folder "/etc/kubernetes/manifests"
  31 [control-plane] Creating static Pod manifest for "kube-apiserver"
  32 [control-plane] Creating static Pod manifest for "kube-controller-manager"
  33 [control-plane] Creating static Pod manifest for "kube-scheduler"
  34 [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
  35 [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
  36 [apiclient] All control plane components are healthy after 16.503433 seconds
  37 [upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
  38 [kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
  39 [upload-certs] Skipping phase. Please see --experimental-upload-certs
  40 [mark-control-plane] Marking the node master as control-plane by adding the label "node-role.kubernetes.io/master=''"
  41 [mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
  42 [bootstrap-token] Using token: xfvno5.q2xfb2m3nw7grdjm
  43 [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
  44 [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
  45 [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
  46 [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
  47 [bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
  48 [addons] Applied essential addon: CoreDNS
  49 [addons] Applied essential addon: kube-proxy
  50 
  51 Your Kubernetes control-plane has initialized successfully!
  52 
  53 To start using your cluster, you need to run the following as a regular user:
  54 
  55   mkdir -p $HOME/.kube
  56   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  57   sudo chown $(id -u):$(id -g) $HOME/.kube/config
  58 
  59 You should now deploy a pod network to the cluster.
  60 Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  61   https://kubernetes.io/docs/concepts/cluster-administration/addons/
  62 
  63 Then you can join any number of worker nodes by running the following on each as root:
  64 
  65 kubeadm join 172.17.0.69:6443 --token xfvno5.q2xfb2m3nw7grdjm \
  66     --discovery-token-ca-cert-hash sha256:26d11c038d236967630d401747f210af9e3679fb1638e8b599a2da4cb98ab159

   1 master $ mkdir -p $HOME/.kube
   2 master $ pwd
   3 /root
   4 master $ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
   5 master $ sudo chown $(id -u):$(id -g) $HOME/.kube/config
   6 master $ export KUBECONFIG=$HOME/.kube/config
   7 master $ echo $KUBECONFIG
   8 /root/.kube/config

deploy cni weaveworks - deploy a pod network to the cluster

Container Network Interface (CNI) defines how the different nodes and their workloads should communicate. Weave Net provides a network to connect all pods together, implementing the Kubernetes model. Kubernetes uses the Container Network Interface (CNI) to join pods onto Weave Net.

   1 master $ kubectl apply -f /opt/weave-kube
   2 serviceaccount/weave-net created
   3 clusterrole.rbac.authorization.k8s.io/weave-net created
   4 clusterrolebinding.rbac.authorization.k8s.io/weave-net created
   5 role.rbac.authorization.k8s.io/weave-net created
   6 rolebinding.rbac.authorization.k8s.io/weave-net created
   7 daemonset.extensions/weave-net created
   8 
   9 master $ kubectl get pod -n kube-system
  10 NAME                             READY   STATUS    RESTARTS   AGE
  11 coredns-fb8b8dccf-b9rd7          1/1     Running   0          11m
  12 coredns-fb8b8dccf-sfgbn          1/1     Running   0          11m
  13 etcd-master                      1/1     Running   0          10m
  14 kube-apiserver-master            1/1     Running   0          10m
  15 kube-controller-manager-master   1/1     Running   0          10m
  16 kube-proxy-l42wp                 1/1     Running   0          11m
  17 kube-scheduler-master            1/1     Running   1          10m
  18 weave-net-mcxml                  2/2     Running   0          84s

join cluster

   1 master $ kubeadm token list  # check tokens
   2 TOKEN                     TTL   EXPIRES                USAGES                   DESCRIPTION                                                 EXTRA GROUPS
   3 xfvno5.q2xfb2m3nw7grdjm   23h   2019-07-28T16:19:18Z   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token
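Bootstrap tokens expire (note the 23h TTL above). If the token is gone by the time a worker joins, a fresh one, along with the full join command, can be generated on the master. A sketch:

```shell
# Create a new bootstrap token and print the matching "kubeadm join" command.
kubeadm token create --print-join-command
```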

   1 # in node01
   2 # join cluster
   3 kubeadm join --discovery-token-unsafe-skip-ca-verification --token=xfvno5.q2xfb2m3nw7grdjm 172.17.0.69:6443
   4 [preflight] Running pre-flight checks
   5 [preflight] Reading configuration from the cluster...
   6 [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
   7 [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
   8 [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
   9 [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
  10 [kubelet-start] Activating the kubelet service
  11 [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
  12 
  13 This node has joined the cluster:
  14 * Certificate signing request was sent to apiserver and a response was received.
  15 * The Kubelet was informed of the new secure connection details.
  16 
  17 Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
  18 # The --discovery-token-unsafe-skip-ca-verification flag bypasses Discovery Token verification.
  19 
  20 # in master
  21 master $ kubectl get nodes
  22 NAME     STATUS   ROLES    AGE    VERSION
  23 master   Ready    master   17m    v1.14.0
  24 node01   Ready    <none>   107s   v1.14.0
  25 master $
  26 
  27 # in node01
  28 node01 $ kubectl get nodes
  29 The connection to the server localhost:8080 was refused - did you specify the right host or port?
  30 node01 $
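The localhost:8080 error on node01 is expected: kubectl there has no kubeconfig, so it falls back to the local default. One way to run kubectl from the worker is to copy the admin kubeconfig over; a sketch, assuming root SSH access from node01 to the master:

```shell
# On node01: fetch the admin kubeconfig from the master and point kubectl at it.
mkdir -p $HOME/.kube
scp root@master:/etc/kubernetes/admin.conf $HOME/.kube/config
export KUBECONFIG=$HOME/.kube/config
kubectl get nodes   # should now list master and node01
```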

deploy container in cluster

   1 master $ kubectl create deployment http --image=katacoda/docker-http-server:latest
   2 deployment.apps/http created
   3 master $ kubectl get pods
   4 NAME                    READY   STATUS    RESTARTS   AGE
   5 http-7f8cbdf584-74pd9   1/1     Running   0          11s
   6 
   7 master $ docker ps | grep http-server
   8 master $
   9 
  10 node01 $ docker ps | grep http-server
  11 adb3cde7f861   katacoda/docker-http-server   "/app"   About a minute ago   Up About a minute
  12 k8s_docker-http-server_http-7f8cbdf584-74pd9_default_04a17065-b08d-11e9-bff1-0242ac110045_0
  13 
  14 # expose deployment
  15 master $ kubectl get pods
  16 NAME                    READY   STATUS    RESTARTS   AGE
  17 http-7f8cbdf584-74pd9   1/1     Running   0          17m
  18 master $ kubectl expose deployment http  --port=80 --type=NodePort
  19 service/http exposed
  20 
  21 master $ kubectl get service http
  22 NAME   TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
  23 http   NodePort   10.101.65.149   <none>        80:30982/TCP   49s
  24 
  25 master $ curl 10.101.65.149:80
  26 <h1>This request was processed by host: http-7f8cbdf584-74pd9</h1>
  27 
  28 master $ curl http://10.101.65.149
  29 <h1>This request was processed by host: http-7f8cbdf584-74pd9</h1>

apply dashboard in cluster

  • Dashboard: general-purpose web UI for Kubernetes clusters (Dashboard version v1.10.0)

master $ kubectl apply -f dashboard.yaml
secret/kubernetes-dashboard-certs created
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard created
master $ kubectl get pods -n kube-system
NAME                                    READY   STATUS              RESTARTS   AGE
coredns-fb8b8dccf-b9rd7                 1/1     Running             0          42m
coredns-fb8b8dccf-sfgbn                 1/1     Running             0          42m
etcd-master                             1/1     Running             0          41m
kube-apiserver-master                   1/1     Running             0          40m
kube-controller-manager-master          1/1     Running             0          40m
kube-proxy-gwrps                        1/1     Running             0          26m
kube-proxy-l42wp                        1/1     Running             0          42m
kube-scheduler-master                   1/1     Running             1          40m
kubernetes-dashboard-5f57845f9d-ls7q2   0/1     ContainerCreating   0          2s
weave-net-gww8b                         2/2     Running             0          26m
weave-net-mcxml                         2/2     Running             0          31m

Create service account for dashboard

cat <<EOF | kubectl create -f - 
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
EOF

# Get login token
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
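The describe+grep pipeline above prints the whole secret. The token alone can also be pulled out with jsonpath and base64-decoded; a sketch (secret names like admin-user-token-xxxxx are generated by Kubernetes):

```shell
# Extract just the bearer token for the admin-user service account.
SECRET=$(kubectl -n kube-system get secret -o name | grep admin-user-token)
kubectl -n kube-system get "$SECRET" -o jsonpath='{.data.token}' | base64 -d
```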

When the dashboard was deployed, it used externalIPs to bind the service to port 8443. This makes the dashboard available from outside the cluster, viewable at https://2886795335-8443-kitek05.environments.katacoda.com/

# Use the admin-user token to access the dashboard.
https://2886795335-8443-kitek05.environments.katacoda.com/#!/login
# sign in using token
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXNzcTl4Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI2Y2RiNGZmMy1iMDkwLTExZTktYmZmMS0wMjQyYWMxMTAwNDUiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.R2OtDYxXaR0Pgluzq1m8FMZflF2tdYtJdG5XhkVC28vf1WkJu-Zo51I5ONUiK2WdBEMPw-N2PW_R9l6lak1clvlxfUSn777nThYSxhmR5pfxi6GmDlFo928KJvWVPDen1jrzAaQOEUZ1maOzPcnjKGpR-CRTgmYDnxZY84rqi68y0vfdn16ER8HeW-wkJ-hfGyUAhryk_ob1CUBjjbs-vefpaLcHLdrWNaKaFi1j5fCc_eJi10FpSTmuBsb04xgN0I17hkTlSw2fyOAj7LtC3pBDrK0nOdHCJkBEtsg89rkvLufYph5AFeoWQVKdW9JZH8BYS91BFla7pZnTwdBVeA

https://2886795335-8443-kitek05.environments.katacoda.com/#!/overview?namespace=default

services

   1 master $ kubectl get service
   2 NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
   3 http         NodePort    10.101.65.149   <none>        80:30982/TCP   17m
   4 kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        56m
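For a service without a reachable external IP, kubectl can also tunnel to it directly. A sketch using port-forward (which runs in the foreground; backgrounded here only for illustration):

```shell
# Forward local port 8080 to port 80 of the http service, then test it.
kubectl port-forward service/http 8080:80 &
curl http://localhost:8080
```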

Start containers using Kubectl

   1 minikube start # start kubernetes cluster and its components
   2 * minikube v1.2.0 on linux (amd64)
   3 * Creating none VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
   4 * Configuring environment for Kubernetes v1.15.0 on Docker 18.09.5
   5   - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
   6 * Pulling images ...
   7 * Launching Kubernetes ...
   8 * Configuring local host environment ...
   9 * Verifying: apiserver proxy etcd scheduler controller dns
  10 
  11 
  12 * Done! kubectl is now configured to use "minikube"
  13 
  14 $ kubectl get nodes
  15 NAME       STATUS   ROLES    AGE    VERSION
  16 minikube   Ready    master   2m2s   v1.15.0
  17 
  18 $ kubectl get service
  19 NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
  20 kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   2m18s
  21 # This deployment is issued to the Kubernetes master, which launches the required Pods and containers.
  22 # "kubectl run" is similar to "docker run", but at the cluster level.
  23 
  24 #  launch a deployment called http which will start a container based on the Docker Image katacoda/docker-http-server:latest.
  25 $ kubectl run http --image=katacoda/docker-http-server:latest --replicas=1
  26 kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
  27 deployment.apps/http created
  28 $ kubectl get deployments
  29 NAME   READY   UP-TO-DATE   AVAILABLE   AGE
  30 http   1/1     1            1           6s
  31 
  32 # you can describe the deployment process.
  33 kubectl describe deployment http
  34 
  35 # expose the container port 80 on the host 8000 binding to the external-ip of the host.
  36 $ kubectl expose deployment http --external-ip="172.17.0.13" --port=8000 --target-port=80
  37 service/http exposed
  38 
  39 $ curl http://172.17.0.13:8000
  40 <h1>This request was processed by host: http-5fcf9dd9cb-zfkkz</h1>
  41 
  42 $ kubectl get pods
  43 NAME                    READY   STATUS    RESTARTS   AGE
  44 http-5fcf9dd9cb-zfkkz   1/1     Running   0          3m26s
  45 
  46 $ kubectl get service
  47 NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
  48 http         ClusterIP   10.100.157.159   172.17.0.13   8000/TCP   57s
  49 kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP    7m41s
  50 
  51 $ curl http://10.100.157.159:8000
  52 <h1>This request was processed by host: http-5fcf9dd9cb-zfkkz</h1>
  53 
  54 $ kubectl run httpexposed --image=katacoda/docker-http-server:latest --replicas=1 --port=80 --hostport=8001
  56 kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version.
  57 Use kubectl run --generator=run-pod/v1 or kubectl create instead.
  58 deployment.apps/httpexposed created
  59 $ curl http://172.17.0.13:8001
  60 <h1>This request was processed by host: httpexposed-569df5d86-rzzhb</h1>
  61 $ kubectl get svc
  62 NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
  63 http         ClusterIP   10.100.157.159   172.17.0.13   8000/TCP   3m50s
  64 kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP    10m
  65 
  66 $ kubectl get pods
  67 NAME                          READY   STATUS    RESTARTS   AGE
  68 http-5fcf9dd9cb-zfkkz         1/1     Running   0          7m9s
  69 httpexposed-569df5d86-rzzhb   1/1     Running   0          36s
  70 
  71 # Scaling the deployment will request Kubernetes to launch additional Pods.
  72 $ kubectl scale --replicas=3 deployment http
  73 deployment.extensions/http scaled
  74 
  75 $ kubectl get pods # amount of pods for service http increased to 3
  76 NAME                          READY   STATUS    RESTARTS   AGE
  77 http-5fcf9dd9cb-fhljh         1/1     Running   0          31s
  78 http-5fcf9dd9cb-wb2dh         1/1     Running   0          31s
  79 http-5fcf9dd9cb-zfkkz         1/1     Running   0          9m27s
  80 httpexposed-569df5d86-rzzhb   1/1     Running   0          2m54s
  81 
  82 # Once each Pod starts it will be added to the load balancer service.
  83 
  84 $ kubectl get service
  85 NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
  86 http         ClusterIP   10.100.157.159   172.17.0.13   8000/TCP   7m28s
  87 kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP    14m
  88 
  89 $ kubectl describe svc http
  90 Name:              http
  91 Namespace:         default
  92 Labels:            run=http
  93 Annotations:       <none>
  94 Selector:          run=http
  95 Type:              ClusterIP
  96 IP:                10.100.157.159
  97 External IPs:      172.17.0.13
  98 Port:              <unset>  8000/TCP
  99 TargetPort:        80/TCP
 100 Endpoints:         172.18.0.4:80,172.18.0.6:80,172.18.0.7:80
 101 Session Affinity:  None
 102 Events:            <none>
 103 
 104 $ curl http://172.17.0.13:8000
 105 <h1>This request was processed by host: http-5fcf9dd9cb-wb2dh</h1>
 106 $ curl http://172.17.0.13:8000
 107 <h1>This request was processed by host: http-5fcf9dd9cb-fhljh</h1>
 108 $ curl http://172.17.0.13:8000
 109 <h1>This request was processed by host: http-5fcf9dd9cb-zfkkz</h1>
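When done experimenting, the objects created in this section can be removed; a sketch:

```shell
# Tear down the deployments and the service that exposed them.
kubectl delete deployment http httpexposed
kubectl delete service http
```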