Kubernetes

Install

Ubuntu:

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - && \
  echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list && \
  sudo apt-get update -q && \
  sudo apt-get install -qy kubelet kubeadm kubectl
  sudo apt-mark hold kubelet kubeadm kubectl
  #apt-mark hold kubelet kubeadm kubectl docker-ce

Hint: “docker” has to be installed beforehand!
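
For example, a minimal sketch (assuming the Ubuntu distribution package docker.io is sufficient; docker-ce from Docker's own repository works as well):

sudo apt-get update
sudo apt-get install -y docker.io
sudo systemctl enable --now docker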

See also on https://kubernetes.io/docs/setup/independent/install-kubeadm/

SLES12:

Available within docker-ee!

or

https://software.opensuse.org/download.html?project=Virtualization%3Acontainers&package=kubernetes

zypper addrepo https://download.opensuse.org/repositories/Virtualization:containers/SLE_12_SP3/Virtualization:containers.repo
zypper refresh
zypper install kubernetes

Hint: Not officially supported and “zypper install kubernetes” doesn't work, but it's possible to install it manually:

zypper in kubernetes-kubelet
zypper in kubernetes-kubeadm
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
chmod +x kubectl
mv kubectl /usr/bin/.

systemctl is-enabled kubelet.service
systemctl enable kubelet.service

cgroup vs. systemd driver

docker

Changing the settings so that your container runtime and kubelet use systemd as the cgroup driver stabilized the system. Please note the native.cgroupdriver=systemd setting below:

## Install Docker CE.
apt-get update && apt-get install docker-ce=18.06.2~ce~3-0~ubuntu

# Setup daemon.
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

# Restart docker.

mkdir -p /etc/systemd/system/docker.service.d
systemctl daemon-reload
systemctl restart docker

Check:

docker info |grep -i cgroup
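
A more script-friendly check (assuming a Docker version whose "docker info" supports the --format flag):

docker info --format '{{.CgroupDriver}}'                                                 #should print "systemd"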

k8s

Check:

cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf | grep -i Environment
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
EnvironmentFile=-/etc/default/kubelet
cat /var/lib/kubelet/config.yaml |grep -i cgroupDriver

Change to:

cgroupDriver: systemd

if not already set.

Check also:

/var/lib/kubelet/kubeadm-flags.env

and

/var/lib/kubelet/config.yaml

Check after modification:

systemctl daemon-reload
systemctl restart kubelet.service
systemctl status kubelet.service
systemctl status kubelet.service | grep "\--cgroup-driver=systemd"

master

Run only on master:

kubeadm config images pull                                                                                              #pull the images before setting up k8s
kubeadm init --apiserver-advertise-address=192.168.10.5 --pod-network-cidr=192.168.0.0/16                               #if there are several NICs you have to choose the management NIC/IP
kubeadm init --pod-network-cidr=192.168.0.0/16                                                                          #set pod-network-cidr

Hint: If your system is running behind a proxy, you have to add a proxy exclude (“/etc/environment”):

no_proxy="localhost,127.0.0.1,IP-Master-Node,IP-Worker-Node,IP_Master-Node-Network,10.96.0.0/12,192.168.0.0,::1"

To start using your cluster, you need to run the following as a regular user with sudo rights:

useradd -s /bin/bash -m kubernetes                                       
su - kubernetes
#rm -r $HOME/.kube
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Check:

kubectl get pods -o wide --all-namespaces
kubectl get pods --all-namespaces -o wide -w
kubectl get pods --all-namespaces

Calico

https://docs.projectcalico.org/v3.10/reference/node/configuration

You have to deploy a pod network to the cluster. A pod network add-on is required so that your pods can communicate with each other!

kubectl apply -f [podnetwork].yaml

Pod network add-ons:

https://kubernetes.io/docs/concepts/cluster-administration/addons/

Example “calico”:

kubectl apply -f https://docs.projectcalico.org/v3.8/manifests/calico.yaml

Check also https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#pod-network

Important:

Change

            # Auto-detect the BGP IP address.
            - name: IP
              value: "autodetect"

to

            # Auto-detect the BGP IP address.
            - name: IP
              value: "autodetect"
            - name: IP_AUTODETECTION_METHOD
              value: "interface=ens18"

in

calico.yaml

Download:

curl https://docs.projectcalico.org/v3.10/manifests/calico.yaml -O

The interface has to be set explicitly (“ens18” in this example)!

Script to change “calico-v3.8.5.yaml”:

set-interface.sh
#!/bin/bash
 
#author:  Thomas Roehm
#version: 1.1
 
INTERFACE="ens18"
#CALIVERS="calico-v3.8.5.yaml"
 
echo ""
echo "Overview calico versions:"
echo ""
ls -al /home/kubernetes/calico
 
echo ""
read -p "Please enter the calico version you want to patch to (e. g. \"calico-v3.8.5.yaml\"): " CALIVERS
echo "Version: \"$CALIVERS\" will be modified!"
echo ""
 
grep -R 'value: "interface' ${CALIVERS}
#grep -R 'value: "interface' calico-v3.8.5.yaml
IFACESET=$(echo $?)
 
if [ ${IFACESET} -eq 0 ]
then
                echo "Interface already set - nothing to do"
else
                sed -i 's/value: "autodetect"/value: "autodetect"\n            - name: IP_AUTODETECTION_METHOD\n              value: "interface='${INTERFACE}'"/g' ${CALIVERS}
                echo "Interface set to \"${INTERFACE}\""
fi

Dashboard

Install:

kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta5/aio/deploy/recommended.yaml
kubectl apply -f kubernetes-dashboard.yaml

Note: Check also https://github.com/kubernetes/dashboard/releases

Delete:

kubectl -n kube-system delete deployment kubernetes-dashboard                                                     # < v2.0.0         
kubectl -n kubernetes-dashboard delete deployment kubernetes-dashboard                                            # > v2.0.0 as namespace of dashboard has changed
kubectl -n kubernetes-dashboard delete $(kubectl -n kubernetes-dashboard get pod -o name | grep dashboard)

Edit:

kubectl edit deployment kubernetes-dashboard -n kube-system                            # < v2.0.0
kubectl edit deployment kubernetes-dashboard -n kubernetes-dashboard                   # > v2.0.0 as namespace of dashboard has changed

Show config:

kubectl describe pods -n kube-system kubernetes-dashboard                              # < v2.0.0                              
kubectl describe pods -n kubernetes-dashboard kubernetes-dashboard                     # > v2.0.0 as namespace of dashboard has changed

To change login “token-ttl”, edit

    spec:
      containers:
      - name: kubernetes-dashboard
        image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          - --auto-generate-certificates
          - --token-ttl=28800

to the value you prefer (default 900 sec). If “token-ttl” is not available, add the argument.
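
Instead of editing the deployment interactively, the argument can also be appended with a JSON patch (a sketch, assuming the argument is not present yet and the v2.x namespace kubernetes-dashboard; use -n kube-system for versions < v2.0.0):

kubectl -n kubernetes-dashboard patch deployment kubernetes-dashboard --type=json -p='[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--token-ttl=28800"}]'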

Check also on https://github.com/kubernetes/dashboard/wiki/Dashboard-arguments

Proxy Access

It's not recommended for productive use, so use it just for quick access or troubleshooting!

Network access on port 9999 without host restriction. Note: This MUST be run as the kubernetes user (unless you run kubernetes as root):

kubectl proxy --port 9999 --address='192.168.10.5' --accept-hosts="^*$"

Access only on localhost on default port 8001:

kubectl proxy                                                                                                                         

Access-URL:

http://192.168.10.5:9999/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

Default access-URL:

http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

HTTPS Access

A certificate installed in the client browser is required for access! Generate it on your kubernetes master and install it on your client.

Certificate (run as kubernetes user):

grep 'client-certificate-data' ~/.kube/config | head -n 1 | awk '{print $2}' | base64 -d >> kubecfg.crt
grep 'client-key-data' ~/.kube/config | head -n 1 | awk '{print $2}' | base64 -d >> kubecfg.key
openssl pkcs12 -export -clcerts -inkey kubecfg.key -in kubecfg.crt -out kubecfg.p12 -name "kubernetes-client"

Furthermore, a “ServiceAccount” in the namespace “kube-system” with a “ClusterRoleBinding” is required.

Create service account “admin-user”:

cat <<EOF | kubectl create -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
EOF

Create ClusterRoleBinding:

cat <<EOF | kubectl create -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
EOF

Get the Bearer Token, which is required for browser login:

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')

Note: If you run

kubectl get secret -n kube-system $(kubectl get serviceaccount -n kube-system -o yaml |grep admin-user | grep token | awk '{print $3}') -o yaml

you do not get the decoded bearer token; the token has to be piped through “base64 --decode” before it can be used for authentication! Example:

echo "38nnbnbfnktopkeknfvvs..lkjkjhrkjhkdknlöxc,x00073" | base64 --decode

With “describe” you get the bearer token directly!
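
Alternatively, the decoded token can be extracted in one line via jsonpath (same secret lookup as above):

kubectl -n kube-system get secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}') -o jsonpath='{.data.token}' | base64 --decode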

Access URL:

https://<master-ip-or-dns-name>:<apiserver-port>/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy                               # < v2.0.0
https://<master-ip-or-dns-name>:<apiserver-port>/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/login              # > v2.0.0 as namespace of dashboard has changed

Example:

https://my-k8s:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy                                                            # < v2.0.0
https://my-k8s:6443/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy                                                   # > v2.0.0 as namespace of dashboard has changed

Note: Use cluster-info to get the access information:

kubectl cluster-info

Login with Kubeconfig

Get the configuration file:

kubectl config view --raw

Save the content to a file and reference it when logging in.
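
For example (the file name is arbitrary):

kubectl config view --raw > dashboard.kubeconfig

Select this file in the “Kubeconfig” option of the dashboard login dialog.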

Check also on https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/

Own Certificate

If you are running several Kubernetes systems, you may want to avoid always having to select the same default certificate name (“kubernetes-admin”) when accessing the dashboard. In that case you can create your own certificates (such as “kubecfg-myhostname.crt”) and ClusterRoleBindings.

Create the *.csr, *.crt, *.p12 and *.key:

openssl req -out kubecfg-myhostname.csr -new -newkey rsa:4096 -nodes -keyout kubecfg-myhostname.key -subj "/C=DE/ST=BW/L=MyCity/O=MyOrganisation/OU=Datacenter/CN=admin-user/emailAddress=tmade@test.com"
sudo openssl x509 -req -in kubecfg-myhostname.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -out kubecfg-myhostname.crt -days 1000 -sha256
openssl pkcs12 -export -clcerts -inkey kubecfg-myhostname.key -in kubecfg-myhostname.crt -out kubecfg-myhostname.p12 -name "kubernetes-client"

Note: The “common name” (CN) must be the same as the account name!

Check certificate:

openssl x509 -noout -text -in kubecfg-myhostname.crt

Create a service account (whose name matches the CN):

cat <<EOF | kubectl create -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
EOF

Create a ClusterRoleBinding for the ServiceAccount:

cat <<EOF | kubectl create -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user-sa-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
EOF

Create a ClusterRoleBinding for the user (token):

cat <<EOF | kubectl create -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: admin-user
EOF

Get the bearer token:

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')

Script to add a dashboard user:

add_dashboard_user.sh
#!/bin/bash
 
echo "Important: The \"Common Name\" (CN) must be the same as the ServiceAccount name (e. g. tmade)!"
 
#author:  Thomas Roehm
#version: 1.2
 
C="DE"
ST="BW"
L="MyCity"
O="tmade"
OU="Cluster"
CN="tmade"
MAIL="test@test.com"
 
openssl req -out kubecfg-${CN}.csr -new -newkey rsa:4096 -nodes -keyout kubecfg-${CN}.key -subj "/C=${C}/ST=${ST}/L=${L}/O=${O}/OU=${OU}/CN=${CN}/emailAddress=${MAIL}"
sudo openssl x509 -req -in kubecfg-${CN}.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -out kubecfg-${CN}.crt -days 1000 -sha256
openssl pkcs12 -export -clcerts -inkey kubecfg-${CN}.key -in kubecfg-${CN}.crt -out kubecfg-${CN}.p12 -name "kubernetes-client"
 
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ${CN}
  namespace: kube-system
EOF
 
cat <<EOF | kubectl create -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ${CN}-user-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: ${CN}
EOF
 
cat <<EOF | kubectl create -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ${CN}-sa-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: ${CN}
  namespace: kube-system
EOF

Minikube - Pods on Master

Remove the taint on the master so that you can schedule pods on it (doesn't work by default):

kubectl taint nodes $(hostname) node-role.kubernetes.io/master-   

Revert:

kubectl taint nodes  $(hostname) node-role.kubernetes.io/master="":NoSchedule
kubectl taint nodes $(hostname) node-role.kubernetes.io/master-                    #only worker
kubectl taint nodes  --all node-role.kubernetes.io/master="":NoSchedule

Check:

kubectl describe nodes $HOSTNAME |grep -i Taints
kubectl describe nodes |grep -i taint                                                                                                      

Worker-Node

Install “docker-ce, kubelet, kubeadm and kubectl”:

https://www.tmade.de/wiki/doku.php?id=docker:kubernetes#install

https://www.tmade.de/wiki/doku.php?id=docker:docker#install

Note: Set proxy settings for master and worker if running behind a proxy (“/etc/environment”)!!

To join the cluster:

useradd -m kubernetes            

Note: sudo rights required!

su - kubernetes
sudo kubeadm join 192.168.10.5:6443 --token abcdefg.vfxyrqvmgmasdfgd --discovery-token-ca-cert-hash sha256:4256123788006008703a33fafc2
sudo kubeadm join --token <token> <control-plane-host>:<control-plane-port> --discovery-token-ca-cert-hash sha256:<hash>

Check on master:

kubectl get nodes
kubectl get nodes -o wide
kubectl delete node NODENAME 

Set label (on master):

sudo kubectl label node NODENAME node-role.kubernetes.io/worker-1=worker-1
sudo kubectl label node NODENAME node-role.kubernetes.io/worker-2=worker-2
sudo kubectl label node knode node-role.kubernetes.io/knode-1=knode-1

Delete label (on master):

kubectl label node NODENAME node-role.kubernetes.io/worker-1-

Delete node from cluster:

kubectl get nodes -o wide
kubectl drain <node name> --delete-local-data --force --ignore-daemonsets                           #evacuate pods
kubectl delete nodes NODENAME                                                                       #on master as user on which kubernetes is running
kubeadm reset -f && iptables -F                                                                     #on node as root user
iptables -t nat -F && iptables -t mangle -F && iptables -X                                          #on node as root user      
kubernetes@kmaster:~$ kubectl get nodes
NAME      STATUS   ROLES    AGE   VERSION
kmaster   Ready    master   48d   v1.13.2
knode     Ready    worker   23m   v1.13.2

Note: You can get the token via:

kubeadm token list

Cluster information:

kubectl cluster-info

If no token is listed, run

kubeadm token create --print-join-command

to create a new token and show the join command.

To delete a token:

kubeadm token delete TOKEN(ID) 

Service Accounts

kubectl apply -f dashboard-adminuser.yaml
kubectl delete -f dashboard-adminuser.yaml
kubectl create serviceaccount myuser
kubectl create serviceaccount --namespace kube-system test
kubectl get serviceaccounts admin-user --namespace=kube-system -o yaml
kubectl get serviceaccount --all-namespaces
kubectl get serviceaccounts myuser -o yaml
kubectl get secret | grep myuser
kubectl get secret myuser-token-1yvwg -o yaml                                            #you get the exact name ("myuser-token-abcde") via "kubectl get secret | grep myuser"
kubectl delete serviceaccount -n kube-system kubernetes-dashboard                        #namespace=kube-system, username=kubernetes-dashboard

Create service account “admin-user”:

cat <<EOF | kubectl create -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
EOF

Create ClusterRoleBinding:

cat <<EOF | kubectl create -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
EOF

Get the Bearer Token:

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')

Create a “ClusterRoleBinding” for the dashboard service account and log in without authentication (just for testing purposes!!):

cat <<EOF | kubectl create -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
EOF

Note: Just press “Skip” on the dashboard to log in!

Kube* Autocomplete

Set up bash autocompletion for the current shell and permanently (the bash-completion package should be installed first):

source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc 
source <(kubeadm completion bash)
echo "source <(kubeadm completion bash)" >> ~/.bashrc

Note: This has to be done for each user!

Additional aliases (set in “/etc/bash.bashrc”) don't work anymore after adding the completion!

Solution:

cat << EOF >> ~/.bashrc
# Source global definitions
if [ -f /etc/bash.bashrc ]; then
    . /etc/bash.bashrc
fi
EOF
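
Optionally, a short alias can reuse the kubectl completion (a sketch; it assumes the completion above is already sourced in ~/.bashrc):

echo "alias k=kubectl" >> ~/.bashrc
echo "complete -o default -F __start_kubectl k" >> ~/.bashrc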

Reset Cluster

If you want to reset the whole cluster to the state after a fresh install, just run this on each node:

sudo kubeadm reset -f
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

Delete:

kubectl drain <node-name> --delete-local-data --force --ignore-daemonsets
kubectl delete node <node-name>

Single-Node-Cluster

Uninstall

sudo kubeadm reset -f
sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X
sudo apt-get purge kubeadm kubectl kubelet kubernetes-cni kube*
sudo apt-get autoremove

As the kubernetes service user:

sudo rm -rf ~/.kube

Config

/var/lib/kubelet/kubeadm-flags.env                        #auto-generated by kubeadm at runtime and should not be edited

You can add flags in

/etc/default/kubelet
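
For example, a hypothetical entry (flag and value are only an illustration, the master IP from the examples above):

# /etc/default/kubelet
KUBELET_EXTRA_ARGS="--node-ip=192.168.10.5"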

Kubeconfig folder:

/etc/kubernetes

Persistent Volume

Info:

kubectl get persistentvolumes --all-namespaces -o wide
kubectl get persistentvolumeclaims --all-namespaces -o wide
kubectl get storageclasses.storage.k8s.io

PersistentVolume:

kind: PersistentVolume
apiVersion: v1
metadata:
  name: nfs-test1
  labels:
    type: nfs                        # optional
spec:
  storageClassName: local-storage
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:                              # type
    server: 192.168.10.6            # IP NFS-host
    path: /nfs-share                # path

PersistentVolumeClaim:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs-test1-claim1
  namespace: default
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi

Storage class:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer

Check also on https://kubernetes.io/docs/concepts/storage/storage-classes/

kubectl get storageclass
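
A minimal usage sketch, assuming the three manifests above were saved as storageclass.yaml, pv.yaml and pvc.yaml (file names are arbitrary):

kubectl apply -f storageclass.yaml -f pv.yaml -f pvc.yaml
kubectl get pv,pvc -o wide                                 #with WaitForFirstConsumer the claim stays "Pending" until a pod uses it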

Namespace:

apiVersion: v1
kind: Namespace
metadata:
  name: <insert-namespace-name-here>

or directly via:

kubectl create namespace NAMESPACE

POD

nginx

Example “nginx”:

kind: Pod
apiVersion: v1
metadata:
  name: nginx-pod
  labels: 
    app: nginx
    namespace: default
spec:
  volumes:
    - name: nfs-test1
      persistentVolumeClaim:
       claimName: nfs-test1-claim1
  containers:
    - name: nginx-pod
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: nfs-test1

Service - NodePort:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx
  namespace: default
  name: nginx-nodeport
spec:
  externalName: nginx-nodeport
  ports:
  - name: http-port-tcp
    port: 80
    targetPort: 80
    nodePort: 30000
    protocol: TCP
  selector: 
    app: nginx
  type: NodePort
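
A usage sketch, assuming the pod and service above were saved as nginx-pod.yaml and nginx-nodeport.yaml:

kubectl apply -f nginx-pod.yaml -f nginx-nodeport.yaml
kubectl get pods,svc -o wide
curl http://<node-ip>:30000                                #the NodePort is reachable on every node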

squid

kind: Pod
apiVersion: v1
metadata:
  name: squid-test
  labels: 
    app: proxy
    namespace: default
spec:
  volumes:
    - name: nfs-data1
      persistentVolumeClaim:
       claimName: nfs-data1-claim
  containers:
    - name: squid-test
      image: ubuntu-squid:16.04
      command: ["/bin/sh","-ce"]
      #args: ["/usr/local/squid/sbin/squid -z && sleep 10 && /etc/init.d/squid start && echo Squid started || echo Squid could not start, exit && tail -f /dev/null"]
      args: ["/usr/local/squid/sbin/squid -z && sleep 10 && /etc/init.d/squid start && echo Squid started || echo Squid could not start, exit && while true; do sleep 30; done"]
      ports:
        - containerPort: 8080
          name: "proxy-server"
      volumeMounts:
        - mountPath: "/data"
          name: nfs-data1

Service - NodePort:

kind: Service
apiVersion: v1
metadata:
  labels:
    app: proxy
  namespace: default
  name: proxy-nodeport
spec:
  externalName: proxy-nodeport
  ports:
  - name: proxy-port-tcp
    port: 8080
    targetPort: 8080
    nodePort: 30000
    protocol: TCP
  selector: 
    app: proxy
  type: NodePort

Deployment

kind: Deployment
apiVersion: apps/v1
metadata:
  name: proxy-deployment
  namespace: default
  labels:
    app: proxy
    namespace: default
spec:
  replicas: 4
  selector:
    matchLabels:
      app: proxy
  template:
    metadata:
      labels:
        app: proxy
    spec:
      containers:
        - name: proxy
          image: 'ubuntu16-squid:16.04'
          command:
            - /bin/sh
            - '-ce'
          args:
            - log-level=DEBUG
            - >-
              /etc/init.d/squid start && echo Squid started || echo Squid could not start, exit && tail -f /dev/null
          ports:
            - containerPort: 8080
              protocol: TCP
      restartPolicy: Always
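
A usage sketch, assuming the deployment above was saved as proxy-deployment.yaml:

kubectl apply -f proxy-deployment.yaml
kubectl rollout status deployment/proxy-deployment
kubectl scale deployment proxy-deployment --replicas=2
kubectl get deployments -o wide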

Nodeport, LB, Ingress

Commands

kubeadm init --pod-network-cidr 10.244.0.0/16 
kubectl get nodes -o wide                                         #show cluster, role and node status
kubectl get namespaces
kubectl describe nodes node1
kubectl delete nodes NODENAME
kubectl delete pods calico-node-w6qz4 -n kube-system
kubectl get pods --all-namespaces --field-selector 'status.phase==Failed' -o json | kubectl delete -f -         #delete failed or evicted pods
kubectl get pods -o wide --all-namespaces
kubectl get pods -o wide --all-namespaces --show-labels
kubectl get pods -A -o wide
time kubectl get pods -A
kubectl get pods --all-namespaces -o=jsonpath='{range .items[*]}{"\n"}{.metadata.name}{":\t"}{range .spec.containers[*]}{.image}{", "}{end}{end}' |sort
kubectl get pods --namespace kube-system
kubectl delete pods <pod_name> --grace-period=0 --force -n <namespace>
kubectl delete --all pods --namespace <namespace>
kubectl get pods -n <namespace> | grep "searchstring-or-status" | awk '{print $1}' | xargs kubectl --namespace=<namespace> delete pod --grace-period=0 -o name
kubectl describe pods --namespace kube-system kubernetes-dashboard
kubectl describe pods -n kube-system kubernetes-dashboard
kubectl cluster-info
kubectl cluster-info dump
kubectl cordon nodename                                                                     #mark the node as unschedulable. This ensures that no new pods will get scheduled while you are preparing it for removal or maintenance.
kubectl uncordon nodename                                                                   #allow scheduling on the node again
kubectl version
kubectl version | base64 | tr -d '\n'
kubectl get pod -o wide
kubectl edit pods --namespace=kube-system kubernetes-dashboard-57df4db6b-4h9pc
kubectl exec -it --namespace=test01 ubuntu -- /bin/bash
kubectl exec -it --namespace=default squid-proxy -- /bin/bash
kubectl exec squid-proxy -- ps -ef                                                          #execute command "ps -ef" and output to stdout
kubectl get jobs --all-namespaces
kubectl get cronjobs --all-namespaces
kubectl get deployments --all-namespaces -o wide                                            #equivalent: "kubectl get deploy --all-namespaces"
kubectl --namespace kube-system delete deployment kubernetes-dashboard
kubectl get services --all-namespaces
kubectl describe pod calico-node-s7ch5 -n kube-system
kubectl describe service --all-namespaces | grep -i nodeport                                #nodeport
kubectl get node -o=jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}'
kubectl replace -f file.yaml
kubectl replace --force -f file.yaml
kubectl apply -f file.yaml
kubectl delete -f file.yaml
kubectl autoscale deployment foo --min=2 --max=10

Logging:

kubectl get events
kubectl get events -n default
kubectl delete events --all
kubectl logs -n kube-system -p calico-node-xxxxx -c calico-node
kubectl logs calico-node-s7ch5 -n kube-system -c calico-node
sudo journalctl -xeu kubelet
sudo journalctl -xeuf kubelet

Alias

echo "alias kg='kubectl get'" >> /etc/bash.bashrc

DNS

kubectl get ep kube-dns -n kube-system -o wide
kubectl get svc -n kube-system -o wide | grep dns
kubectl get svc -n kube-system -o wide
kubectl get configmap -n kube-system coredns -oyaml

https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/
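
A quick resolution test from inside the cluster (a throw-away pod; busybox:1.28 is commonly used because its nslookup output is reliable):

kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default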

Certificate

Renew all certificates:

sudo kubeadm alpha certs renew all
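
To see when the certificates expire (available in kubeadm v1.15 and later):

sudo kubeadm alpha certs check-expiration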

Patching

Order:

  • Patch master (patch-k8s-master.sh on master)
  • Prepare patching worker (prepare-k8s-worker.sh on master)
  • Patch worker (patch-k8s-worker.sh on worker)

To patch a cluster, you can run the following scripts (working for k8s >= v1.15.x).

Patch master:

patch-k8s-master.sh
#!/bin/bash
 
#author:  Thomas Roehm
#version: 1.1
 
echo "You´re running version:"
echo ""
su - kubernetes -c "kubectl version"
echo ""
read -s -n 1 -p \"'Press any key to continue . . .'\"
 
apt-get update
apt-cache policy kubeadm
 
echo ""
read -p "Please enter k8s version you want to patch to (e. g. \"1.16.2-00\"): " VERSION
echo "Version: \"$VERSION\" will be updated!"
echo ""
 
apt-mark unhold kubernetes-cni kubeadm kubelet kubectl docker-ce
apt-get update && apt-get install -y kubeadm=${VERSION}
 
echo ""
#echo "drain node $(hostname -s)"
#su - kubernetes -c "kubectl drain $(hostname -s) --ignore-daemonsets"
echo ""
 
APPLYVERSION="v$(echo ${VERSION} | cut -d "-" -f1)"
echo ""
echo "version $APPLYVERSION will be applied"
echo ""
read -s -n 1 -p \"'Press any key to continue . . .'\"
kubeadm upgrade plan
echo ""
read -s -n 1 -p \"'Press any key to continue . . .'\"
kubeadm upgrade apply ${APPLYVERSION}
 
#apt-cache policy docker-ce
#echo ""
#read -p "Please enter docker-ce-version you want to patch to (e. g. \"5:18.09.9~3-0~ubuntu-xenial"): " DVERSION
#echo "Version: \"$iDVERSION\" will be updated!"
#echo ""
#apt-get install -y docker-ce
echo ""
#echo "uncordon node $(hostname -s)"
echo ""
#su - kubernetes -c "kubectl uncordon $(hostname -s)"
echo "patching kublet,kubectl"
echo ""
read -p "Do you want to proceed? Please enter y or n: " PROCEED
echo ""
echo "You´ve entered:  \"${PROCEED}\""
echo ""
if [ ${PROCEED} = "y" ]
then
        apt-get install -y kubelet=${VERSION} kubectl=${VERSION}
        apt-mark hold kubeadm kubernetes-cni kubelet kubectl docker-ce
        systemctl restart docker.service kubelet.service
        systemctl status docker.service kubelet.service | cat
else
        exit 1
fi

Hint: Always patch to the latest patch level within one minor version before you upgrade to the next version.

Example:

Running version: 1.15.3-00
Update to 1.15.6-00
Update to 1.16.X-00

Prepare patching of the worker (on master):

prepare-k8s-worker.sh
#!/bin/bash
 
#author:  Thomas Roehm
#version: 1.1
 
echo "Getting worker:"
echo ""
su - kubernetes -c "kubectl get nodes"
echo ""
read -p "Please enter the name of the worker you want to update: " NODENAME
echo "Worker: \"$NODENAME\" will be patched"
echo ""
su - kubernetes -c "kubectl drain ${NODENAME} --ignore-daemonsets"
#Below version k8s <= v1.15.x run:
#kubeadm upgrade node config --kubelet-version v1.15.x
kubeadm upgrade node
 
#Proceed or cancel
echo ""
read -p "Do you want to wait until ${NODENAME} has been patched to finish (uncordon) the patch-process? Please enter y (wait) or n: " PROCEED
echo "You´ve entered:  \"$PROCEED\""
echo ""
 
if [ $PROCEED = y ]
then
        while read -s -p "Please enter \"p\" to proceed: " p && [[ -z "$p" ]] ;
        do
                echo "Please enter \"p\" to proceed"
        done
                su - kubernetes -c "kubectl uncordon ${NODENAME}"
                echo "Uncordon ${NODENAME}"
                su - kubernetes -c "kubectl get nodes -o wide"
else
        exit 1
fi

Patch worker:

patch-k8s-worker.sh
#!/bin/bash
 
#author:  Thomas Roehm
#version: 1.1
 
echo "You´re running version:"
echo ""
su - kubernetes -c "kubectl version"
echo ""
read -s -n 1 -p "Press any key to continue . . ."
#su - kubernetes -c "read -s -n 1 -p \"Press any key to continue . . .\""
apt-get update
apt-cache policy kubeadm
 
echo ""
read -p "Please enter k8s version you want to patch to (e. g. \"1.16.2-00\"): " VERSION
echo "Version: \"$VERSION\" will be updated!"
echo ""
 
apt-mark unhold kubernetes-cni kubeadm kubelet kubectl docker-ce
apt-get update && apt-get install -y kubeadm=${VERSION} kubelet=${VERSION} kubectl=${VERSION}
systemctl restart docker.service kubelet.service
systemctl status docker.service kubelet.service | cat
apt-mark hold kubeadm kubernetes-cni kubelet kubectl docker-ce
echo ""
echo "worker updated"

Trident:

https://github.com/NetApp/trident/releases

trident-update.sh
#!/bin/bash
 
#author:  Thomas Roehm
#version: 1.0
 
VERSION="19.10.0"
HOME="/home/kubernetes/"
FILE="${HOME}trident-installer-${VERSION}.tar.gz"
 
if  [ -e $FILE ]
then
        echo "${FILE} exists, please check if trident is already up to date. Wrong version referenced in script!?"
        exit 1
else
        echo "patching trident"
        sleep 10
        su - kubernetes -c "wget https://github.com/NetApp/trident/releases/download/v${VERSION}/trident-installer-${VERSION}.tar.gz -P ${HOME}"
        su - kubernetes -c "mv ~/trident-installer ~/trident-installer.old"
        su - kubernetes -c "tar -xzf trident-installer-${VERSION}.tar.gz"
        su - kubernetes -c "mkdir ~/trident-installer/setup"
        su - kubernetes -c "cp -a ~/trident-installer.old/setup/backend.json ~/trident-installer/setup/."
        su - kubernetes -c "~/trident-installer/tridentctl uninstall -n trident"
        su - kubernetes -c "~/trident-installer/tridentctl install -n trident"
fi

Install-Script

Install k8s - the install repositories have to be added beforehand!

Ubuntu Repository:

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - && \
  echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list && \
  sudo apt-get update -q && \
  sudo apt-get install -qy kubelet kubeadm kubectl
  sudo apt-mark hold kubelet kubeadm kubectl
  #apt-mark hold kubelet kubeadm kubectl docker-ce

Download calico.yaml and dashboard.yaml and create the required folder structure (check the variables).

Install:

install-k8s.sh
#!/bin/bash
 
#author:  Thomas Roehm
#version: 1.2
 
USER="kubernetes"
HOME="/home/${USER}"
CALICO="/home/kubernetes/calico"
CALICOVERS="v3.10.2"
KUBEHOME="${HOME}/.kube"
#CIDR="10.0.0.5"
DASBOARD="/home/kubernetes/dashboard"
DASHVERS="v2.0.0-beta8"
PODNETWORKADDON="192.168.0.0/16"
 
echo ""
echo "Setup -------------k8s--------------"
echo ""
su - kubernetes -c "kubectl version"
echo ""
su - kubernetes -c "read -s -n 1 -p \"Press any key to continue . . .\""
 
apt-get update
apt-cache policy kubeadm
#apt-cache policy docker-ce
 
echo ""
read -p "Please enter k8s version you want to install (e. g. \"1.16.4-00\"): " VERSION
echo "Version: \"$VERSION\" will be installed!"
apt-mark unhold kubernetes-cni kubeadm kubelet kubectl docker-ce
#apt-mark unhold kubernetes-cni kubeadm kubelet kubectl docker-ce
apt-get install -y kubeadm=${VERSION} kubelet=${VERSION} kubectl=${VERSION}
echo ""
read -p "Please enter your CIDR management ip-adress for your master (e. g. \"10.6.33.10\"): " CIDR
echo ""
echo "ip set to: \"$CIDR\""
echo ""
kubeadm init --apiserver-advertise-address=${CIDR} --pod-network-cidr=${PODNETWORKADDON}
echo ""
read -s -n 1 -p "Press any key to continue . . ."
echo ""
if  [ -e ${KUBEHOME} ]
then
        echo "\"${KUBEHOME}\" exists"
        read -p "Do you want to delete \"${KUBEHOME}\"? Please enter y (proceed) or n (stop): " PROCEED
        echo "You´ve entered:  \"$PROCEED\""
        echo ""
        if [ $PROCEED = y ]
        then
                rm -r ${KUBEHOME}
                echo "\"${KUBEHOME}\" deleted!"
                echo ""
                read -s -n 1 -p "Press any key to continue . . ."
        else
        exit 1
        fi
fi
su - ${USER} -c "mkdir -p $HOME/.kube"
su - ${USER} -c "sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
chown -R ${USER}:${USER} $HOME/.kube
echo ""
echo "home \"$HOME/.kube\" copied!"
echo ""
read -s -n 1 -p "Press any key to continue . . ."
#calico pod-network-addon
#su - kubernetes -c "kubectl apply -f /home/kubernetes/calico/${CALICOVERS}/rbac-kdd.yaml"
#su - kubernetes -c "kubectl apply -f /home/kubernetes/calico/${CALICOVERS}/calico.yaml"
su - kubernetes -c "kubectl apply -f ${CALICO}/calico-${CALICOVERS}.yaml"
echo ""
echo "calico pod network add on has been deployed"
echo ""
read -s -n 1 -p "Press any key to continue . . ."
#install dashboard
su - kubernetes -c "kubectl apply -f ${DASBOARD}/kubernetes-dashboard-${DASHVERS}.yaml"
echo ""
echo "dashboard has been deployed"
echo ""
read -s -n 1 -p "Press any key to continue . . ."
 
apt-mark hold kubernetes-cni kubeadm kubelet kubectl docker-ce
 
echo ""
echo "Status - please press \"ctrl + c\" when all pods are running"
echo ""
 
watch kubectl get pods -A -o wide

Reset k8s:

reset-k8s.sh
#!/bin/bash
 
#author:  Thomas Roehm
#version: 1.1
 
HOME="/home/kubernetes"
 
sudo kubeadm reset -f
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
rm -r ${HOME}/.kube

helm
