

"Kubernetes is an open-source platform for automating deployment, scaling, and operations of application containers across clusters of hosts, providing container-centric infrastructure." - kubernetes.io


Related links:

  • More terms in the k8s glossary
  • CLI usage
  • Learn about Kubernetes
  • kubectl explain roles

Multiple kubeconfigs

The default config is ~/.kube/config, but if you want to use multiple configs you can do this:

export KUBECONFIG="${HOME}/code/kubespray/artifacts/admin.conf:${HOME}/.kube/config"

I have seen weird problems when the order of configs is changed, such as certificate-authority-data and client-certificate-data being missing.


kubeadm

"kubeadm: easily bootstrap a secure Kubernetes cluster." - kubeadm --help

Show your kubeadm tokens

$ sudo kubeadm token list
TOKEN                     TTL       EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
ubyc9a.1eq2ihwtnz7c7c9e   23h       2018-05-24T16:19:33-04:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token

See sudo kubeadm token -h for more usage.


kubectl

"kubectl controls the Kubernetes cluster manager." - kubectl --help


  • kubectl get - show all resource types with short-hand versions.

  • kubectl completion -h - show how to configure completion for your shell.
  • kubectl config get-contexts - show which k8s configuration contexts you can control.
  • kubectl config use-context foo - switch to the foo context.
  • kubectl get nodes - show the nodes in the k8s cluster.
  • kubectl get pods - show deployed pods. There can be many pods per deployment.
  • kubectl get pods -n kube-system - show pods in a specific namespace.
  • kubectl get pods,hpa,deployment --all-namespaces - get several resource types at once, from all namespaces
  • kubectl describe pod foo
  • kubectl get deployment
  • kubectl describe deployment foo
  • kubectl get ns - show namespaces.
  • kubectl get pv - show persistent volumes.
  • kubectl get svc -n kube-system - show a table of important details about running services in the kube-system namespace.
  • kubectl get pods -o yaml - show the yaml configs for the currently running status of every pod.
  • kubectl explain pods.spec - show documentation about pod specifications.
  • kubectl describe pods/echoserver - describe the pod whose Name is echoserver.
  • kubectl get rs - show replica sets.
  • kubectl expose deployment <deployment_name> --type=NodePort - create a service for the given deployment.
  • kubectl scale deployment <deployment_name> --replicas=5 - scale a deployment to 5 pods.
  • kubectl rollout history deployment <deployment_name>
  • kubectl get cm - get a list of config maps.
  • kubectl get apiservices - get a list of API service endpoints. Use -o yaml to view status about availability, endpoints, etc.

Working with several configs

Sometimes you want individual configs, such as when you are using configs that are updated by other engineers and pulled down via git. Other times you want one monolithic config, such as when you are using a tool that cannot easily work with multiple configs.

Use multiple configs via alias

This is a great method for requiring explicit selection of the environment, which is good for prod.

alias foo-k-prod="export KUBECONFIG=$HOME/.kube/foo-prod-config ; kubectl config set-context foo-prod --namespace=default ;"

See also Google Cloud for more examples like this related to GCP.

Merge several configs

This produces a monolithic file named kube_config which can be moved to ~/.kube/config. It merges the contents of your existing ~/.kube/config file.

export KUBECONFIG="${HOME}/.kube/config"
for X in $(find "$REPO_DIR/kube_config.d" -name '*.config') ; do
  KUBECONFIG="${KUBECONFIG}:${X}"
done
kubectl config view --flatten > kube_config
echo "Config file successfully created at ${PWD}/kube_config"
echo "Run: mv -i ${PWD}/kube_config ${HOME}/.kube/config"

Create a KUBECONFIG env var from several config files

This produces a KUBECONFIG that looks like file1:file2:file3

KUBECONFIG=""
for config in $(find "$REPO_DIR/kube_config.d" -name '*.config') ; do
  KUBECONFIG="${KUBECONFIG:+${KUBECONFIG}:}${config}"
done
export KUBECONFIG
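As a sketch, the same colon-join can be exercised against throwaway files instead of real kubeconfigs (the temp directory and file names below are illustrative):

```shell
# Build a colon-separated KUBECONFIG-style list from files in a directory.
tmp=$(mktemp -d)
touch "$tmp/a.config" "$tmp/b.config"
KUBECONFIG=""
for config in "$tmp"/*.config ; do
  # ${VAR:+...} adds the ":" separator only once KUBECONFIG is non-empty
  KUBECONFIG="${KUBECONFIG:+${KUBECONFIG}:}${config}"
done
echo "$KUBECONFIG"   # prints <tmpdir>/a.config:<tmpdir>/b.config
rm -r "$tmp"
```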

Show nodes and their taints

kubectl get nodes --output 'jsonpath={range .items[*]}{.metadata.name} {.spec.taints[*]}{"\n"}{end}'

Drain and cordon a node

Do this before deleting or reloading a node.

kubectl drain --ignore-daemonsets --force --delete-local-data $NODE_NAME

Show pods, sorted by creation time

kubectl only sorts in one direction:

kubectl get pods --sort-by=.metadata.creationTimestamp

To reverse the order you can use awk and tac (which is cat in reverse):

kubectl get pods --sort-by=.metadata.creationTimestamp |
awk 'NR == 1; NR > 1 {print $0 | "tac"}'
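The header-preserving reversal is plain text processing, so it can be checked against any input. This variant adds fflush() so the header is guaranteed to be written before tac's buffered output even when stdout is not a terminal:

```shell
# Keep line 1 (the header) in place and reverse everything after it.
printf 'HEADER\na\nb\nc\n' |
awk 'NR == 1 {print; fflush()} NR > 1 {print $0 | "tac"}'
# Output:
# HEADER
# c
# b
# a
```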

Show pods that are not running

kubectl get pods --all-namespaces --field-selector='status.phase!=Running' --sort-by=.metadata.creationTimestamp

Show pods that are terminating

Unfortunately "Terminating" shows up as a status, but is not a phase, so we have to jump through some hoops to show this list. Here's one way to do this:

kubectl get pods -A |
awk '$4 == "Terminating" {print $1,$2}' |
while read -r NS POD ; do
  kubectl get pod "$POD" -n "$NS" -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,TERMINATION_GRACE_PERIOD:.spec.terminationGracePeriodSeconds
done |
column -t |
sort -u

And the output will be something like:

NAMESPACE                       NAME                                                TERMINATION_GRACE_PERIOD
otv-blazing-ray-3043            blazing-ray-3043-miner-7556f86b76-8mpdj            600
otv-gravitational-century-8705  gravitational-century-8705-miner-66b6dd97cc-c2mqq  600
otv-lunar-nova-0800             lunar-nova-0800-miner-86684cd6f8-d79wm             600

Show all images referenced by your k8s manifests

kubectl get pods --all-namespaces -o jsonpath="{..image}" |
tr -s '[[:space:]]' '\n' |
sort |
uniq -c |
sort -n
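Everything after the kubectl call is ordinary text processing, so the counting behavior can be seen with made-up image names:

```shell
# Split on whitespace, then count duplicate tokens (image names are invented).
printf 'nginx:1.19 redis:6 nginx:1.19\n' |
tr -s '[[:space:]]' '\n' |
sort |
uniq -c |
sort -n
# uniq -c prefixes each line with its count: redis:6 appears once, nginx:1.19 twice
```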

Watch what's going on in your cluster

watch kubectl get pods --all-namespaces -o wide

Watch events in a given namespace

kubectl -n kube-system get events --field-selector type=Warning -w

Or format the event messages with more useful information (really wide output)

kubectl get events -w -o custom-columns=FirstSeen:.firstTimestamp,LastSeen:.lastTimestamp,Kind:.involvedObject.kind,Name:.involvedObject.name,Count:.count,From:.source.component,Type:.type,Reason:.reason,Message:.message

Show all containers for each pod matching a label

kubectl -n kube-system get pod -l k8s-app=kube-dns -o=jsonpath='{range .items[*]}{"\n"}{.metadata.name}{":\n\t"}{range .spec.containers[*]}{.name}{":\t"}{.image}{"\n\t"}{end}{"\n"}{end}'

Show logs for a given pod since N hours ago

kubectl logs $pod_name --since=12h

The --since arg can take [s]econds, [m]inutes and [h]ours. Longer durations should use --since-time=<rfc3339 timestamp>

Show logs for a given pod since a given date

The --since-time arg takes an RFC3339 datetime. EG: 1991-08-03T13:31:46-07:00. This format requirement is strict. It is incompatible with the GNU date --rfc-3339=seconds output, which uses a space instead of a T to separate the full date from the full time, and with date +%FT%T%z, which does not include a colon in the UTC offset.

kubectl logs $pod_name --since-time="$(date --iso-8601=seconds -d '-5 weeks')"
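A quick way to compare the three GNU date formats mentioned above (the timestamps in the comments are illustrative):

```shell
date --rfc-3339=seconds  # 2019-10-23 17:00:56-07:00 - space separator, rejected by kubectl
date --iso-8601=seconds  # 2019-10-23T17:00:56-07:00 - valid RFC3339
date +%FT%T%z            # 2019-10-23T17:00:56-0700  - no colon in the offset, rejected
```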

Output custom column names

$ kubectl get pvc --all-namespaces -o custom-columns='NAME:.metadata.name,SIZE:.spec.resources.requests.storage'
NAME                   SIZE
foo-logs               256Gi
test-volume-2          1Gi
some-awesome-service   5Gi

$ kubectl get pods -o custom-columns='NAME:.metadata.name,START_TIME:.status.startTime,GITLAB_USER_EMAIL:.spec.containers[0].env[?(@.name == "GITLAB_USER_EMAIL")].value' | grep -E 'NAME|jobs'
NAME                                                 START_TIME             GITLAB_USER_EMAIL
runner-ppzmy1zx-project-11548552-concurrent-0q2pmk   2019-10-23T17:00:56Z
runner-ppzmy1zx-project-11548552-concurrent-1f7nfx   2019-10-23T17:04:27Z
runner-ppzmy1zx-project-11548552-concurrent-2n84rv   2019-10-23T17:04:19Z

Perform a rolling restart of a deployment, daemonset or statefulset

kubectl rollout restart deployment $DEPLOYMENT_NAME

Run a cronjob out of schedule

kubectl create job --from=cronjob/download-cat-pix manual-run

Create a yaml file for a resource type

You can generate yaml for a variety of entities without having to create them on the server. Each entity requires different syntax, so you have to work through the error messages to get to a final solution.

$ kubectl create --dry-run=client -o yaml cronjob --schedule='15 * * * *' --image=image-name:1.2.3 job-name
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  creationTimestamp: null
  name: job-name
spec:
  jobTemplate:
    metadata:
      creationTimestamp: null
      name: job-name
    spec:
      template:
        metadata:
          creationTimestamp: null
        spec:
          containers:
          - image: image-name:1.2.3
            name: job-name
            resources: {}
          restartPolicy: OnFailure
  schedule: 15 * * * *
status: {}


Installation

The standard way to install k8s by yourself is to use kubeadm.

Manually on Ubuntu 16

## as root
swapoff -a # kubelet refuses to run with swap enabled
apt update
apt install -y apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
echo "deb [arch=amd64] https://download.docker.com/linux/ubuntu xenial stable" > /etc/apt/sources.list.d/docker.list
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list
apt update
apt dist-upgrade -y
apt install -y docker-ce
apt install -y kubelet kubeadm kubectl
kubeadm init

See the kubeadm init guide on kubernetes.io for what to do after init.


DNS

Kubernetes lets you resolve resources via DNS.

Enable k8s dns logging

kubectl -n kube-system edit configmap coredns
## Add 'log' to the 'Corefile' config

DNS Entity map

  • Kubernetes Service: <service>.<namespace>.svc.cluster.local. (eg: httpbin.default.svc.cluster.local.)
kubectl get svc --all-namespaces -o jsonpath='{range .items[*]}{.metadata.name}{"."}{.metadata.namespace}{".svc.cluster.local.\n"}{end}'
kubectl get svc --all-namespaces -o json | jq -r '.items[] | "\(.metadata.name).\(.metadata.namespace).svc.cluster.local."'
  • With core-dns you can run dig SRV +short '*.*.svc.cluster.local.' to get a list of all services.
  • Kubernetes service srv records: _${service_port_name}._${protocol}.${service}.${namespace}.svc.cluster.local. (eg: _http._tcp.httpbin.default.svc.cluster.local.)
  • Pod by IP address: ${pod_ip_address}.${namespace}.pod.cluster.local. (eg: 192-168-1-6.default.pod.cluster.local.)
  • Pod by name: ...?
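The naming rules above are pure string assembly; here they are sketched with placeholder values (httpbin, default, and the pod IP are examples):

```shell
service=httpbin
namespace=default
echo "${service}.${namespace}.svc.cluster.local."
# httpbin.default.svc.cluster.local.
echo "_http._tcp.${service}.${namespace}.svc.cluster.local."
# _http._tcp.httpbin.default.svc.cluster.local.
ip=192.168.1.6
echo "$(echo "$ip" | tr . -).${namespace}.pod.cluster.local."   # dots in the IP become dashes
# 192-168-1-6.default.pod.cluster.local.
```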


crictl

crictl is a tool to inspect the local Container Runtime Interface (CRI).

user@k3snode:~$ sudo crictl pods
POD ID          CREATED       STATE  NAME                            NAMESPACE    ATTEMPT
688ecc2d9ce4d   2 weeks ago   Ready  log-fluentd-676d9d7c9d-ghz5x    default      8
ee1d8b0593e71   2 weeks ago   Ready  tiller-deploy-677f9cb999-rx6qp  kube-system  7
1153f4c0bd1f4   2 weeks ago   Ready  coredns-78fcdf6894-qsl74        kube-system  8
5be9c530c8cdc   2 weeks ago   Ready  calico-node-59spv               kube-system  10
d76d211830064   2 weeks ago   Ready  kube-proxy-cqdvn                kube-system  104
aa1679e0bfcca   2 weeks ago   Ready  kube-scheduler-s1               kube-system  10
ef64eea461bc0   2 weeks ago   Ready  kube-controller-manager-s1      kube-system  10
14ec5abe1e3ab   2 weeks ago   Ready  kube-apiserver-s1               kube-system  11
d4ce465a0942f   2 weeks ago   Ready  etcd-s1                         kube-system  10