diff --git a/docs/kubernetes/apps/nextcloud.md b/docs/kubernetes/apps/nextcloud.md
new file mode 100644
index 0000000000000000000000000000000000000000..95570bb6fdb264a6103e04af197553f38cdc444f
--- /dev/null
+++ b/docs/kubernetes/apps/nextcloud.md
@@ -0,0 +1,16 @@
+# Nextcloud on kubernetes
+
+https://medium.com/faun/nextcloud-scale-out-using-kubernetes-93c9cac9e493
+
+# Manual installation
+
+```
+export NEXTCLOUD_PASSWORD=$(pwgen -1) \
+  NEXTCLOUD_MARIADB_PASSWORD=$(pwgen -1) \
+  NEXTCLOUD_MARIADB_ROOT_PASSWORD=$(pwgen -1) \
+  ONLYOFFICE_JWT_SECRET=$(pwgen -1) \
+  ONLYOFFICE_POSTGRESQL_PASSWORD=$(pwgen -1) \
+  ONLYOFFICE_RABBITMQ_PASSWORD=$(pwgen -1)
+helmfile -b /usr/local/bin/helm -e oas -f /var/lib/OpenAppStack/source/helmfiles/helmfile.d/20-nextcloud.yaml apply
+```
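+
+To verify, list the release's pods (assuming the `oas` namespace used above):
+
+```
+kubectl -n oas get pods | grep -i nextcloud
+```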
+
+# Uninstall
+
+```
+helmfile -b /usr/local/bin/helm -e oas -f /var/lib/OpenAppStack/source/helmfiles/helmfile.d/20-nextcloud.yaml destroy
+```
diff --git a/docs/kubernetes/apps/smtp.md b/docs/kubernetes/apps/smtp.md
new file mode 100644
index 0000000000000000000000000000000000000000..f9f7ab3867b745d6c3451704fc96d10047f5977f
--- /dev/null
+++ b/docs/kubernetes/apps/smtp.md
@@ -0,0 +1,17 @@
+# SMTP charts
+
+## cloudposse/postfix
+
+https://hub.helm.sh/charts/cloudposse/postfix
+
+Uses the [cloudposse/postfix](https://hub.docker.com/r/cloudposse/postfix) Docker
+image: based on `ubuntu:14.04`, last updated 2 years ago, no automated builds.
+
+## halkeye/postfix
+
+https://hub.helm.sh/charts/halkeye/postfix
+https://github.com/halkeye-helm-charts/postfix
+Uses the [applariat/tx-smtp-relay](https://hub.docker.com/r/applariat/tx-smtp-relay)
+Docker image: no Dockerfile available, last updated in 2016!
+
+See also `~/Howtos/docker/smtp.md` for more images.
diff --git a/docs/kubernetes/cert-manager.md b/docs/kubernetes/cert-manager.md
index 35228600d379275c93c047bd6428107dd626fb42..fa378277bdc6273aadae70878975c82424d23597 100644
--- a/docs/kubernetes/cert-manager.md
+++ b/docs/kubernetes/cert-manager.md
@@ -11,16 +11,22 @@
 
 [Docs: Uninstall](https://docs.cert-manager.io/en/latest/tasks/uninstall/kubernetes.html)
 
-    helm delete --purge oas-test-cert-manager
-    kubectl delete namespace cert-manager
+```
+helm delete --purge oas-test-cert-manager
+kubectl delete namespace cert-manager
+```
 
 i.e. for 0.9.1:
 
-    kubectl delete -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.9/deploy/manifests/00-crds.yaml
+```
+kubectl delete -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.9/deploy/manifests/00-crds.yaml
+```
 
 Verify there's no CRDs left:
 
-    kc get crd --all-namespaces | grep -v calico
+```
+kc get crd --all-namespaces | grep -v calico
+```
 
 ### Troubleshooting
 
@@ -28,11 +34,15 @@ Verify there's no CRDs left:
 
 Custom debug script:
 
-    ~/bin/custom/k8s_debug_cert_manager.sh
+```
+~/bin/custom/k8s_debug_cert_manager.sh
+```
 
 [cmctl](https://cert-manager.io/docs/reference/cmctl/):
 
-    cmctl status certificate --namespace matrix matrix.varac.net
+```
+cmctl status certificate --namespace matrix matrix.varac.net
+```
 
 ## Alternatives to letsencrypt
 
@@ -55,7 +65,9 @@ Cert-manager + ZeroSSL resources:
 
 #### API
 
-    curl https://api.zerossl.com/certificates\?access_key=$zerossl_api_key
+```
+curl https://api.zerossl.com/certificates\?access_key=$zerossl_api_key
+```
 
 ##### ZeroSSL issues
 
diff --git a/docs/kubernetes/cronjobs.md b/docs/kubernetes/cronjobs.md
new file mode 100644
index 0000000000000000000000000000000000000000..7a8113324618fbb1987e7a7a831eff6711d311b4
--- /dev/null
+++ b/docs/kubernetes/cronjobs.md
@@ -0,0 +1,3 @@
+# CronJobs
+
+- [Docs](https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/)
diff --git a/docs/kubernetes/databases/postgresql.md b/docs/kubernetes/databases/postgresql.md
index 63aa01f7eb030bcc1fd12329fa6e8454c88cce61..9d4b50e9fb6ee2d6d03757e141e24f93bba2da4e 100644
--- a/docs/kubernetes/databases/postgresql.md
+++ b/docs/kubernetes/databases/postgresql.md
@@ -1,11 +1,10 @@
 # Postgresql on Kubernetes
 
-* [Bitnami chart](https://github.com/bitnami/charts)
+- [Bitnami chart](https://github.com/bitnami/charts)
 
 ## Manually change the postgres user password
 
-* [Modify the default administrator password](https://docs.bitnami.com/aws/infrastructure/postgresql/administration/change-reset-password/)
-
+- [Modify the default administrator password](https://docs.bitnami.com/aws/infrastructure/postgresql/administration/change-reset-password/)
 
 ## Upgrade postgres major version
 
@@ -13,66 +12,74 @@
 
 Preferred solution: [Dump and restore](https://www.postgresql.org/docs/current/backup.html)
 
-* [How to properly handle major version upgrades?](https://github.com/bitnami/charts/issues/1798)
-* [bitnami docs: Create and restore PostgreSQL backups](https://docs.bitnami.com/ibm/infrastructure/django/administration/backup-restore-postgresql/)
-* [Dump and restore a postgresql database on kubernetes](https://www.adyxax.org/blog/2020/06/25/dump-and-restore-a-postgresql-database-on-kubernetes/)
+- [How to properly handle major version upgrades?](https://github.com/bitnami/charts/issues/1798)
+- [bitnami docs: Create and restore PostgreSQL backups](https://docs.bitnami.com/ibm/infrastructure/django/administration/backup-restore-postgresql/)
+- [Dump and restore a postgresql database on kubernetes](https://www.adyxax.org/blog/2020/06/25/dump-and-restore-a-postgresql-database-on-kubernetes/)
 
 Issues / considerations:
 
-* Don't use text file format, but custom file format (`-Fc`),
+- Don't use the plain-text dump format, but the custom format (`-Fc`),
   otherwise it could lead to text file issues like dos line endings and `\N`
   issues.
-* Also, avoid piping from pod to local filesystem because of the above issue.
+- Also, avoid piping from pod to local filesystem because of the above issue.
 
 #### Manual dumps
 
 Option 0: Use `pg_dumpall` like the automated [bitnami postgresql backup cronjob](https://github.com/bitnami/charts/blob/main/bitnami/postgresql/values.yaml#L1133),
 run from inside the postgres container:
 
-    export PGDUMP_DIR=/var/lib/postgresql/data
-    pg_dumpall --clean --if-exists --load-via-partition-root \
-      --quote-all-identifiers --no-password \
-      --file=${PGDUMP_DIR}/pg_dumpall-$(date '+%Y-%m-%d-%H-%M').pgdump"
+```
+export PGDUMP_DIR=/var/lib/postgresql/data
+pg_dumpall --clean --if-exists --load-via-partition-root \
+  --quote-all-identifiers --no-password \
+  --file=${PGDUMP_DIR}/pg_dumpall-$(date '+%Y-%m-%d-%H-%M').pgdump
+```
 
 Option 1: Dump directly on pg host:
 
-    export PG_PASSWORD=$(kubectl get secret -n plausible plausible-postgresql \
-      -o jsonpath="{.data.postgres-password}" | base64 --decode)
-    echo $PG_PASSWORD
-    kubectl -n plausible exec -ti plausible-postgresql-0 -- pg_dump -Fc --host \
-      plausible-postgresql -U postgres -d plausible -f /tmp/plausible.dump
-    kubectl -n plausible cp plausible-postgresql-0:/tmp/plausible.dump /tmp/plausible.dump
+```
+export PG_PASSWORD=$(kubectl get secret -n plausible plausible-postgresql \
+  -o jsonpath="{.data.postgres-password}" | base64 --decode)
+echo $PG_PASSWORD
+kubectl -n plausible exec -ti plausible-postgresql-0 -- pg_dump -Fc --host \
+  plausible-postgresql -U postgres -d plausible -f /tmp/plausible.dump
+kubectl -n plausible cp plausible-postgresql-0:/tmp/plausible.dump /tmp/plausible.dump
+```
 
 Option 2: Dump from dedicated pg client pod (WIP, still not working since the pod
 terminates after pg_dump)
 
-    export PG_PASSWORD=$(kubectl get secret -n plausible plausible-postgresql \
-      -o jsonpath="{.data.postgres-password}" | base64 --decode)
-    export PG_IMAGE=$(kubectl -n plausible get pod plausible-postgresql-0 \
-      -o jsonpath="{.spec.containers[0].image}")
-    kubectl run db-postgresql-client --rm --tty -i --restart='Never' \
-      --namespace plausible --image $PG_IMAGE --env="PGPASSWORD=$PG_PASSWORD" \
-      --command -- \
-      pg_dump -Fc --host plausible-postgresql -U postgres -d plausible -f /tmp/plausible.dump
+```
+export PG_PASSWORD=$(kubectl get secret -n plausible plausible-postgresql \
+  -o jsonpath="{.data.postgres-password}" | base64 --decode)
+export PG_IMAGE=$(kubectl -n plausible get pod plausible-postgresql-0 \
+  -o jsonpath="{.spec.containers[0].image}")
+kubectl run db-postgresql-client --rm --tty -i --restart='Never' \
+  --namespace plausible --image $PG_IMAGE --env="PGPASSWORD=$PG_PASSWORD" \
+  --command -- \
+  pg_dump -Fc --host plausible-postgresql -U postgres -d plausible -f /tmp/plausible.dump
 
-    kubectl -n plausible cp plausible-postgresql-0:/tmp/plausible.dump /tmp/plausible.dump
+kubectl -n plausible cp plausible-postgresql-0:/tmp/plausible.dump /tmp/plausible.dump
+```
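+
+A possible workaround (untested sketch): keep the client pod alive after the
+dump with a trailing `sleep`, then copy the dump from that pod:
+
+```
+kubectl run db-postgresql-client --restart='Never' \
+  --namespace plausible --image $PG_IMAGE --env="PGPASSWORD=$PG_PASSWORD" \
+  --command -- sh -c 'pg_dump -Fc --host plausible-postgresql -U postgres \
+  -d plausible -f /tmp/plausible.dump && sleep 3600'
+kubectl -n plausible cp db-postgresql-client:/tmp/plausible.dump /tmp/plausible.dump
+kubectl -n plausible delete pod db-postgresql-client
+```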
 
-* Remove Helmrelease
-* Remove Postgres PVC
-* Upgrade helmrelease to new Postgres major version
-* Scale down deployments/statefulsets (i.e. Plausible, Plausible-clickhouse )
+- Remove Helmrelease
+- Remove Postgres PVC
+- Upgrade helmrelease to new Postgres major version
+- Scale down deployments/statefulsets (e.g. Plausible, Plausible-clickhouse); a sketch follows
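+
+A hypothetical scale-down for the Plausible workloads (resource names are
+assumptions based on the examples above):
+
+```
+kubectl -n plausible scale deployment plausible --replicas=0
+kubectl -n plausible scale statefulset plausible-clickhouse --replicas=0
+```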
 
 #### Restore
 
-    kubectl -n plausible cp /tmp/plausible.dump plausible-postgresql-0:/tmp/plausible.dump
+```
+kubectl -n plausible cp /tmp/plausible.dump plausible-postgresql-0:/tmp/plausible.dump
 
-    kubectl -n plausible exec -ti plausible-postgresql-0 -- \
-      dropdb -U postgres plausible
-    kubectl -n plausible exec -ti plausible-postgresql-0 -- \
-      createdb -U postgres plausible
-    kubectl -n plausible exec -ti plausible-postgresql-0 -- \
-      pg_restore -U postgres -d plausible /tmp/plausible.dump
+kubectl -n plausible exec -ti plausible-postgresql-0 -- \
+  dropdb -U postgres plausible
+kubectl -n plausible exec -ti plausible-postgresql-0 -- \
+  createdb -U postgres plausible
+kubectl -n plausible exec -ti plausible-postgresql-0 -- \
+  pg_restore -U postgres -d plausible /tmp/plausible.dump
+```
 
 ### pg_upgrade
 
-* [Upgrade in-place using pg_upgrade](https://github.com/bitnami/charts/issues/8025#issuecomment-964906018)
+- [Upgrade in-place using pg_upgrade](https://github.com/bitnami/charts/issues/8025#issuecomment-964906018)
diff --git a/docs/kubernetes/development.md b/docs/kubernetes/development.md
new file mode 100644
index 0000000000000000000000000000000000000000..452197cda2a14e4a8b5bfcf531797bbef7e54198
--- /dev/null
+++ b/docs/kubernetes/development.md
@@ -0,0 +1,7 @@
+# Develop on kubernetes
+
+## Skaffold
+
+<https://skaffold.dev/>
+<https://github.com/GoogleContainerTools/skaffold>
+<https://skaffold.dev/docs/>
diff --git a/docs/kubernetes/dns.md b/docs/kubernetes/dns.md
index 5d0f5f0255516d6038e080e851d19b343156dd7c..7a3fc2bc53bd96af55c715aa33e21b80310f991c 100644
--- a/docs/kubernetes/dns.md
+++ b/docs/kubernetes/dns.md
@@ -5,15 +5,19 @@
 
 ## Test coredns
 
-    kubectl -n kube-system get pods -l k8s-app=kube-dns
-    kubectl -n kube-system get service kube-dns
-    dig @10.43.0.10 ix.de
-    dig @10.43.0.10 helm-operator.oas.svc.cluster.local
+```
+kubectl -n kube-system get pods -l k8s-app=kube-dns
+kubectl -n kube-system get service kube-dns
+dig @10.43.0.10 ix.de
+dig @10.43.0.10 helm-operator.oas.svc.cluster.local
+```
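+
+To look up the cluster DNS service IP instead of hard-coding it:
+
+```
+kubectl -n kube-system get service kube-dns -o jsonpath='{.spec.clusterIP}'
+```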
 
 ## K8s DNS troubleshooting
 
 [Debugging DNS Resolution](https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/)
 
-    kubectl apply -f https://k8s.io/examples/admin/dns/dnsutils.yaml
-    kubectl get pods dnsutils
-    kubectl exec -i -t dnsutils -- nslookup kubernetes.default
+```
+kubectl apply -f https://k8s.io/examples/admin/dns/dnsutils.yaml
+kubectl get pods dnsutils
+kubectl exec -i -t dnsutils -- nslookup kubernetes.default
+```
diff --git a/docs/kubernetes/events.md b/docs/kubernetes/events.md
new file mode 100644
index 0000000000000000000000000000000000000000..84c5fda8342efc4788cbc676045c8625aef9a5c8
--- /dev/null
+++ b/docs/kubernetes/events.md
@@ -0,0 +1,37 @@
+# Kubernetes events
+
+- [Aggregate kubernetes events](https://open.greenhost.net/openappstack/openappstack/-/issues/706) with eventrouter
+- Open k8s FR [Implement alternative storage destination (for events)](https://github.com/kubernetes/kubernetes/issues/19637)
+
+Get events for a specific pod:
+
+```
+kubectl get event -w -n oas --field-selector involvedObject.name=prometheus-stack
+```
+
+## Eventrouter
+
+- Migration is on its way to [openshift/eventrouter](https://github.com/openshift/eventrouter)
+
+- Deprecated: [heptiolabs/eventrouter](https://github.com/heptiolabs/eventrouter)
+
+- [Grafana recommends eventrouter](https://grafana.com/blog/2020/07/21/loki-tutorial-how-to-send-logs-from-eks-with-promtail-to-get-full-visibility-in-grafana/)
+
+- It seems abandoned, but reportedly it's [still working fine](https://github.com/heptiolabs/eventrouter/issues/126#issuecomment-781289073).
+
+- The chart is in the process of [being moved to the bitnami charts](https://github.com/bitnami/charts/pull/4698) (see also <https://github.com/heptiolabs/eventrouter/issues/121>).
+
+## kubewatch
+
+Watch k8s events and trigger Handlers
+
+https://github.com/bitnami-labs/kubewatch
+
+- Last commit 2020-07 (as of 2021-03)
+- [no matrix support so far](https://github.com/bitnami-labs/kubewatch/issues/245)
+
+## Other alternatives
+
+- [heapster eventer](https://github.com/kubernetes-retired/heapster/tree/master/events) - deprecated
+- [redhat openshift eventrouter](https://docs.okd.io/latest/logging/cluster-logging-eventrouter.html)
+- [kube-eventer](https://github.com/AliyunContainerService/kube-eventer) - maintained, but [no stdout sink which can be used by promtail](https://github.com/AliyunContainerService/kube-eventer/issues/181)
diff --git a/docs/kubernetes/external-secrets.md b/docs/kubernetes/external-secrets.md
index 7e45865057f5338939e9e744b59acc74b7e9cab5..ade2c3438ed51d6af54c91679c832720150e8389 100644
--- a/docs/kubernetes/external-secrets.md
+++ b/docs/kubernetes/external-secrets.md
@@ -1,9 +1,11 @@
 # external-secrets
 
-* [Website/Docs](https://external-secrets.io/)
-* [Github](https://github.com/external-secrets/external-secrets)
-* [Helm chart](https://artifacthub.io/packages/helm/external-secrets-operator/external-secrets)
+- [Website/Docs](https://external-secrets.io/)
+- [Github](https://github.com/external-secrets/external-secrets)
+- [Helm chart](https://artifacthub.io/packages/helm/external-secrets-operator/external-secrets)
 
 [Trigger refresh](https://external-secrets.io/v0.8.1/introduction/faq/#can-i-manually-trigger-a-secret-refresh)
 
-    kubectl -n mastodon annotate es mastodon-sendgrid force-sync=$(date +%s) --overwrite
+```
+kubectl -n mastodon annotate es mastodon-sendgrid force-sync=$(date +%s) --overwrite
+```
diff --git a/docs/kubernetes/gke.md b/docs/kubernetes/gke.md
index 3dbae6c09b3877f9aec85cce23a3c06568c25646..5d4c797491216a397873651d4d1ba2453b6f4890 100644
--- a/docs/kubernetes/gke.md
+++ b/docs/kubernetes/gke.md
@@ -1,18 +1,45 @@
-# GKE
+# Google Kubernetes Engine
 
-* [changes to kubectl authentication coming in GKE v1.26](https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke)
+<https://console.cloud.google.com>
 
-## Monitoring
+- [Trial options](https://console.cloud.google.com/freetrial/signup)
 
-* [gcp-exporter](https://github.com/DazWilkin/gcp-exporter)
-* [Prometheus metrics for Cloud SQL](https://github.com/GoogleCloudPlatform/cloud-sql-proxy#support-for-metrics-and-tracing)
-* [Google Stackdriver Prometheus Exporter](https://github.com/prometheus-community/stackdriver_exporter)
-* [Example: Use postgres_exporter with GCP cloud SQL](https://github.com/prometheus-community/postgres_exporter/issues/477)
+## Node images
 
-## GKE Versionen
+<https://cloud.google.com/kubernetes-engine/docs/concepts/node-images#cos>
 
-Show available versions:
+## gcloud
 
-    gcloud container get-server-config --zone europe-west3-c | yq '.'
-    gcloud container get-server-config --zone europe-west3-c | yq '.channels[] | select(.channel=="REGULAR")'
-    gcloud container get-server-config --zone europe-west3-c | yq '.channels[] | select(.channel=="RAPID")'
+```
+gcloud auth login
+gcloud config set project level-pattern-290811
+```
+
+### Install a cluster with gke
+
+<https://cloud.google.com/kubernetes-engine/docs/quickstart>
+
+```
+gcloud container clusters create cluster-name --num-nodes=1
+```
+
+### Configure kubectl for cluster
+
+```
+gcloud container clusters get-credentials oas-test1 --zone europe-west6-a --project level-pattern-290811
+```
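+
+Verify access to the new cluster:
+
+```
+kubectl get nodes
+```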
+
+### Deploy hello world app
+
+```
+kubectl create deployment hello-server --image=gcr.io/google-samples/hello-app:1.0
+kubectl expose deployment hello-server --type LoadBalancer --port 80 --target-port 8080
+
+kubectl get service hello-server
+```
+
+Note: You might need to wait several minutes before the Service's external IP address populates.
+
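+You can watch the service until the external IP shows up, then curl it:
+
+```
+kubectl get service hello-server --watch
+```
+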
+```
+curl http://34.65.49.245
+```
diff --git a/docs/kubernetes/helm/documentation.md b/docs/kubernetes/helm/documentation.md
index b9bd84e0fed548081afa9fe69201c33e590c3412..5cdc3d58081da47707aa56053fc59d13c415f40e 100644
--- a/docs/kubernetes/helm/documentation.md
+++ b/docs/kubernetes/helm/documentation.md
@@ -2,12 +2,14 @@
 
 ## helm-docs
 
-* <https://github.com/norwoodj/helm-docs>
-* Much more commits, contributors etc than `frigate`
+- <https://github.com/norwoodj/helm-docs>
+- Many more commits, contributors, etc. than `frigate`
 
 Install:
 
-    brew install norwoodj/tap/helm-docs
+```
+brew install norwoodj/tap/helm-docs
+```
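+
+Usage sketch (run from a chart directory, path is a placeholder; helm-docs
+renders `README.md` from the chart metadata and a `README.md.gotmpl` template
+if present):
+
+```
+cd path/to/chart
+helm-docs
+```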
 
 ## Frigate
 
diff --git a/docs/kubernetes/helm/publishing.md b/docs/kubernetes/helm/publishing.md
index 938932f9b55eb2d46ceee589ea547234a3cb3af4..870241cafd85f709be54a628676b198fefdfc0fa 100644
--- a/docs/kubernetes/helm/publishing.md
+++ b/docs/kubernetes/helm/publishing.md
@@ -1,12 +1,12 @@
 # Publish helm charts
 
-* [Complete guide for creating and publishing Helm chart with gh pages](https://ankush-chavan.medium.com/complete-guide-for-creating-and-deploying-helm-chart-423ba8e1f0e6)
+- [Complete guide for creating and publishing Helm chart with gh pages](https://ankush-chavan.medium.com/complete-guide-for-creating-and-deploying-helm-chart-423ba8e1f0e6)
 
 ## Artifacthub
 
-* [Docs](https://artifacthub.io/docs/)
-* [Changelog](https://blog.artifacthub.io/blog/changelogs/)
-  * [Helm annotations](https://artifacthub.io/docs/topics/annotations/helm/)
-  * [Helm annotation example](https://artifacthub.io/docs/topics/annotations/helm/#example)
-* [Helm charts repositories](https://artifacthub.io/docs/topics/repositories/)
-* [cli tool](https://artifacthub.io/docs/topics/cli/)
+- [Docs](https://artifacthub.io/docs/)
+- [Changelog](https://blog.artifacthub.io/blog/changelogs/)
+  - [Helm annotations](https://artifacthub.io/docs/topics/annotations/helm/)
+  - [Helm annotation example](https://artifacthub.io/docs/topics/annotations/helm/#example)
+- [Helm charts repositories](https://artifacthub.io/docs/topics/repositories/)
+- [cli tool](https://artifacthub.io/docs/topics/cli/)
diff --git a/docs/kubernetes/helm/testing.md b/docs/kubernetes/helm/testing.md
index 584ff7ec6d0fe06c991b839de7a2bb72b508e4d4..41bf9124814b1f48d575053b4bc092176de1641e 100644
--- a/docs/kubernetes/helm/testing.md
+++ b/docs/kubernetes/helm/testing.md
@@ -4,29 +4,41 @@
 
 See existing tests:
 
-    grep 'helm.sh/hook.*test' -ir ~/kubernetes/charts
+```
+grep 'helm.sh/hook.*test' -ir ~/kubernetes/charts
+```
 
 ## Example
 
-    cd /home/varac/kubernetes/charts/wekan/charts.git/wekan
+```
+cd /home/varac/kubernetes/charts/wekan/charts.git/wekan
+```
 
 Start kind cluster with ingress setup
 
-    kind_create.sh
+```
+kind_create.sh
+```
 
 Install:
 
-    helm install --set ingress.path=/ wekan .
+```
+helm install --set ingress.path=/ wekan .
+```
 
 Test:
 
-    helm test wekan
+```
+helm test wekan
+```
 
 ### "ct", chart-testing tool
 
 <https://github.com/helm/chart-testing>
 
-    brew install chart-testing
+```
+brew install chart-testing
+```
 
 Can also test a [local repo](https://github.com/helm/chart-testing#local-repo).
 
diff --git a/docs/kubernetes/iam.md b/docs/kubernetes/iam.md
new file mode 100644
index 0000000000000000000000000000000000000000..4a6980bcbb9a30fa15427205de7248303a4e115a
--- /dev/null
+++ b/docs/kubernetes/iam.md
@@ -0,0 +1,21 @@
+# IAM options
+
+- [Gluu blog: keycloak vs gluu](https://www.gluu.org/blog/gluu-versus-keycloak/)
+
+# Keycloak
+
+https://www.keycloak.org/
+https://github.com/keycloak
+
+OpenID Connect, OAuth 2.0, and SAML.
+
+- Cons:
+  - [No U2F](https://issues.jboss.org/browse/KEYCLOAK-6558), scheduled for 5.0
+  - Jira issue tracker
+
+# Gluu
+
+https://www.gluu.org/
+https://github.com/GluuFederation/gluu-docker
+
diff --git a/docs/kubernetes/ingress/ingress.md b/docs/kubernetes/ingress/ingress.md
index 229644c5dfd506815168c2c33d8d62c7e4dd09c2..61004d2627fdeba0633672a4233539fb60f25e15 100644
--- a/docs/kubernetes/ingress/ingress.md
+++ b/docs/kubernetes/ingress/ingress.md
@@ -26,14 +26,18 @@ Other resources, non of them worked for my setup though:
 You can use the `tcp` and `udp` [helm chart config options](https://github.com/helm/charts/tree/master/stable/nginx-ingress#configuration)
 like this:
 
-    tcp:
-      8080: "moewe/unifi-controller:8080"
+```
+tcp:
+  8080: "moewe/unifi-controller:8080"
+```
 
 or this:
 
-     udp:
-      10001: "moewe/unifi-discovery:10001"
-      3478: "moewe/unifi-stun:3478"
+```
+udp:
+  10001: "moewe/unifi-discovery:10001"
+  3478: "moewe/unifi-stun:3478"
+```
 
 **BUT** you can't mix them: it will produce the following error when applying a
 mixed TCP+UDP helmfile:
diff --git a/docs/kubernetes/k3s.md b/docs/kubernetes/k3s.md
index b3b1b3539b07f02afa797713287061f45521e2bd..e3ae9f98399f9a1dc356211dfe8cec83463c9657 100644
--- a/docs/kubernetes/k3s.md
+++ b/docs/kubernetes/k3s.md
@@ -10,12 +10,16 @@
 
 ### curl2bash
 
-    curl -sfL https://get.k3s.io | sh -
-    sudo k3s kubectl get node
+```
+curl -sfL https://get.k3s.io | sh -
+sudo k3s kubectl get node
+```
 
 [Uninstall](https://docs.k3s.io/installation/uninstall):
 
-    /usr/local/bin/k3s-uninstall.sh
+```
+/usr/local/bin/k3s-uninstall.sh
+```
 
 ### k3s wrappers / installers
 
@@ -74,9 +78,11 @@ Storage: `/var/lib/rancher/k3s/storage/`
 
 <https://github.com/rancher/k3os/issues/133>
 
-    cd ~/kubernetes/os/k3os
-    wget https://github.com/rancher/k3os/releases/download/v0.9.0/k3os-amd64.iso
-    ./install.sh
+```
+cd ~/kubernetes/os/k3os
+wget https://github.com/rancher/k3os/releases/download/v0.9.0/k3os-amd64.iso
+./install.sh
+```
 
 - login with user `rancher`
 - sudo k3os install
@@ -84,8 +90,10 @@ Storage: `/var/lib/rancher/k3s/storage/`
 
 ### Running
 
-    virsh start --console k3os
-    ssh k3os
+```
+virsh start --console k3os
+ssh k3os
+```
 
 - ssh only accepts public keys
 
diff --git a/docs/kubernetes/k9s.md b/docs/kubernetes/k9s.md
index 6a2264d7a329ea3bbb1dd1be555af6782cd38a8e..875e61f7600d77b78962ef123c2814cacf216548 100644
--- a/docs/kubernetes/k9s.md
+++ b/docs/kubernetes/k9s.md
@@ -4,12 +4,16 @@
 
 Install binary from GH:
 
-    cd /tmp && wget https://github.com/derailed/k9s/releases/download/v0.27.4/k9s_Linux_amd64.tar.gz \
-      && tar -xzf k9s_Linux_amd64.tar.gz && mv k9s /usr/local/bin
+```
+cd /tmp && wget https://github.com/derailed/k9s/releases/download/v0.27.4/k9s_Linux_amd64.tar.gz \
+  && tar -xzf k9s_Linux_amd64.tar.gz && mv k9s /usr/local/bin
+```
 
 Using brew:
 
-    brew install derailed/k9s/k9s
+```
+brew install derailed/k9s/k9s
+```
 
 ## Usage
 
diff --git a/docs/kubernetes/kind.md b/docs/kubernetes/kind.md
new file mode 100644
index 0000000000000000000000000000000000000000..a80994f626bca56f05cbf64e50bdfa422fa56fdf
--- /dev/null
+++ b/docs/kubernetes/kind.md
@@ -0,0 +1,33 @@
+# kind
+
+- Helps you run Kubernetes clusters locally
+  and in CI pipelines using Docker containers as "nodes".
+
+- CNCF certified Kubernetes installer
+
+- [kind website](https://kind.sigs.k8s.io)
+
+- [kind releases](https://github.com/kubernetes-sigs/kind/releases)
+
+Install: `brew install kind`
+
+- [Available k8s version images tags](https://hub.docker.com/r/kindest/node/tags)
+
+Start cluster without ingress-nginx:
+
+```sh
+time kind create cluster --image=kindest/node:v1.22.4 --config=/home/varac/kubernetes/kind/cluster-config-ingress.yml
+docker ps
+```
+
+Start cluster with [ingress setup](https://kind.sigs.k8s.io/docs/user/ingress/#ingress-nginx):
+
+```sh
+kind_ingress_create.sh
+```
+
+Delete cluster:
+
+```sh
+kind delete cluster
+```
diff --git a/docs/kubernetes/kubernetes.md b/docs/kubernetes/kubernetes.md
index 76673141aacb7710266f4d825cf0724de192f9d8..be4fb450baf775f1b875b8594405a49bddf80fa9 100644
--- a/docs/kubernetes/kubernetes.md
+++ b/docs/kubernetes/kubernetes.md
@@ -1,17 +1,21 @@
 # Docs
 
-* Awesome lists: see `../awesome-container-lists.md`
+- Awesome lists: see `../awesome-container-lists.md`
 
 Etc:
 
-* [0xacab infrastructure/platform_wg wiki](https://0xacab.org/infrastructure/platform_wg/k8/wikis/home)
-* [0xacab riseup/hexacab/kubernetes wiki](https://0xacab.org/riseup/hexacab/kubernetes/wikis/home)
+- [0xacab infrastructure/platform_wg wiki](https://0xacab.org/infrastructure/platform_wg/k8/wikis/home)
+- [0xacab riseup/hexacab/kubernetes wiki](https://0xacab.org/riseup/hexacab/kubernetes/wikis/home)
 
 ## Dashboard
 
-    kubectl proxy
-    https://localhost:8001/ui
+```
+kubectl proxy
+http://localhost:8001/ui
+```
 
 ## Kubernetes and Puppet
 
-    https://puppet.com/blog/managing-kubernetes-configuration-puppet
+<https://puppet.com/blog/managing-kubernetes-configuration-puppet>
diff --git a/docs/kubernetes/lens.md b/docs/kubernetes/lens.md
new file mode 100644
index 0000000000000000000000000000000000000000..8ad9f875bbf74db754c25fba9a090e19d04e4465
--- /dev/null
+++ b/docs/kubernetes/lens.md
@@ -0,0 +1,13 @@
+# lens
+
+<https://docs.k8slens.dev/main/getting-started/>
+
+## Install
+
+### snap
+
+Beware: [the snap lags a major version behind](https://github.com/lensapp/lens/issues/5093)
+
+```
+sudo snap install kontena-lens --classic
+```
diff --git a/docs/kubernetes/limitations.md b/docs/kubernetes/limitations.md
new file mode 100644
index 0000000000000000000000000000000000000000..9e6c7c592593c64da8f1a95c64172571b724cefe
--- /dev/null
+++ b/docs/kubernetes/limitations.md
@@ -0,0 +1,10 @@
+# Kubernetes limitations
+
+## UDP Broadcasts
+
+[metallb: Listen to UDP broadcast traffic](https://github.com/metallb/metallb/issues/344):
+
+> Kubernetes's LoadBalancer logic (which MetalLB relies on) does not support
+> forwarding broadcast traffic into pods, so this simply will not work in k8s,
+> sorry. The only option you have is to run the pods with hostNetwork=true and
+> try to wrangle something with that, but you won't get stable IPs with that setup.
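+
+A hedged sketch of that hostNetwork workaround (image and pod name are
+arbitrary examples):
+
+```
+kubectl run udp-listener --image=wbitt/network-multitool \
+  --overrides='{"spec":{"hostNetwork":true}}'
+```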
diff --git a/docs/kubernetes/logging.md b/docs/kubernetes/logging.md
new file mode 100644
index 0000000000000000000000000000000000000000..8908ce3296903dc42d7630091010c264e14a819d
--- /dev/null
+++ b/docs/kubernetes/logging.md
@@ -0,0 +1,55 @@
+# Kubernetes logging
+
+https://kubernetes.io/docs/concepts/cluster-administration/logging/
+
+# EFK: elasticsearch, fluent-bit and kibana
+
+https://medium.com/@jbsazon/aggregated-kubernetes-container-logs-with-fluent-bit-elasticsearch-and-kibana-5a9708c5dd9a
+
+## Fluent-bit
+
+- https://docs.fluentbit.io/manual/installation/kubernetes
+- [fluent-bit helm chart](https://hub.helm.sh/charts/stable/fluent-bit)
+
+## Elasticsearch
+
+- [Official helm chart](https://github.com/elastic/helm-charts/tree/master/elasticsearch)
+  ([stable/elasticsearch](https://github.com/helm/charts/tree/master/stable/elasticsearch) is deprecated).
+
+Watch all cluster members come up:
+
+```
+kubectl get pods --namespace=logging -l app=elasticsearch-master -w
+```
+
+Test cluster health using Helm test:
+
+```
+helm test elasticsearch
+```
+
+## Kibana
+
+- [Official elastic chart](https://hub.helm.sh/charts/elastic/kibana) -
+  ([stable/kibana](https://hub.helm.sh/charts/stable/kibana) is deprecated)
+
+# Fluentd
+
+https://hub.helm.sh/charts/bitnami/fluentd
+
+To verify that Fluentd has started, run:
+
+```
+kubectl get all -l "app.kubernetes.io/name=fluentd,app.kubernetes.io/instance=fluentd" --all-namespaces
+```
+
+Logs are captured on each node by the forwarder pods and then sent to the aggregator pods. By default, the aggregator pods send the logs to the standard output.
+You can see all the logs by running this command:
+
+```
+kubectl -n logging logs -l "app.kubernetes.io/component=aggregator"
+```
+
+# Loki
+
+see `../../logging/loki.md`
diff --git a/docs/kubernetes/metallb.md b/docs/kubernetes/metallb.md
index 64c4d6121583822f18561156edaef483d0671d1e..20bbae26f0ecdff4d3cbc8ac36695b77d77592d3 100644
--- a/docs/kubernetes/metallb.md
+++ b/docs/kubernetes/metallb.md
@@ -1,4 +1,4 @@
 # Metallb
 
-* [Website](https://metallb.universe.tf/)
-* [Helm chart](https://artifacthub.io/packages/helm/bitnami/metallb)
+- [Website](https://metallb.universe.tf/)
+- [Helm chart](https://artifacthub.io/packages/helm/bitnami/metallb)
diff --git a/docs/kubernetes/minikube.md b/docs/kubernetes/minikube.md
new file mode 100644
index 0000000000000000000000000000000000000000..25ca6e6be7bea9c1a5ea59f9d6f368a60e56c925
--- /dev/null
+++ b/docs/kubernetes/minikube.md
@@ -0,0 +1,44 @@
+# Minikube
+
+- [Github](https://github.com/kubernetes/minikube)
+- [Docs](https://minikube.sigs.k8s.io/docs/)
+
+## Installation
+
+Options:
+
+- Brew: `brew install minikube`
+- Snaps (<https://snapcraft.io/minikube>) are out of date.
+- [Download binaries](https://minikube.sigs.k8s.io/docs/start/)
+
+## Setup
+
+Completion:
+
+```
+minikube completion zsh > ~/.zsh/completion/_minikube
+```
+
+### Choose driver
+
+- [driver](https://minikube.sigs.k8s.io/docs/drivers/)
+- Default driver is docker
+
+#### kvm2
+
+```
+minikube config set vm-driver kvm2
+```
+
+## Usage
+
+```
+minikube start
+minikube delete
+```
+
+### Persistent Volumes
+
+```
+/tmp/hostpath-provisioner/
+```
diff --git a/docs/kubernetes/mixin.md b/docs/kubernetes/mixin.md
new file mode 100644
index 0000000000000000000000000000000000000000..09f04dc5496e1c2a2ba86ebe8957dfc66b452f6c
--- /dev/null
+++ b/docs/kubernetes/mixin.md
@@ -0,0 +1,29 @@
+# Prometheus alerts for Kubernetes
+
+https://github.com/kubernetes-monitoring/kubernetes-mixin
+
+## Setup
+
+Install `jsonnet-bundler` and `jsonnet` following `../../coding/jsonnet.md`
+
+## Produce alerts, rules and dashboards
+
+```
+cd ~/projects/monitoring/prometheus/kubernetes-mixin
+jb install
+
+make prometheus_alerts.yaml
+make prometheus_rules.yaml
+make dashboards_out
+
+yq '.groups[].name' < prometheus_alerts.yaml
+yq '.groups[0].rules[].alert' < prometheus_alerts.yaml
+yq '.groups[0].rules' < prometheus_alerts.yaml
+```
+
+## Produce ansible template-safe output
+
+```
+sed -e '/{{/i {%- raw %}' -e'/{{/a {% endraw %}'  prometheus_alerts.yaml >> ~/oas/openappstack/ansible/roles/apps/templates/settings/prometheus.yaml
+```
diff --git a/docs/kubernetes/monitoring-k8s.md b/docs/kubernetes/monitoring-k8s.md
new file mode 100644
index 0000000000000000000000000000000000000000..d71d05e0a3ff1c3f5bfc82dfd5d0162447e19401
--- /dev/null
+++ b/docs/kubernetes/monitoring-k8s.md
@@ -0,0 +1,25 @@
+# Cluster health checks
+
+https://kubernetes.io/docs/tasks/debug-application-cluster/debug-cluster/
+
+```
+kubectl cluster-info
+kubectl get nodes
+```
+
+## API checks
+
+https://kubernetes.io/docs/reference/using-api/health-checks/
+
+```
+curl -k https://localhost:6443/healthz
+..
+```
+
+or
+
+```
+kubectl get --raw='/healthz?verbose'
+kubectl get --raw='/readyz?verbose'
+kubectl get --raw='/livez?verbose'
+```
diff --git a/docs/kubernetes/network/network.md b/docs/kubernetes/network/network.md
new file mode 100644
index 0000000000000000000000000000000000000000..9ab7d7bc0271d741f309989052b682b52edcc6c5
--- /dev/null
+++ b/docs/kubernetes/network/network.md
@@ -0,0 +1,35 @@
+# Kubernetes networking
+
+## CoreDNS
+
+- [Config docs](https://coredns.io/manual/toc/#configuration)
+
+- Config in configmap `kube-system/coredns`
+
+## Traffic viewer
+
+<https://getmizu.io/>
+
+## Network debug pod
+
+- [Github](https://github.com/wbitt/Network-MultiTool)
+
+[How to use it in k8s](https://github.com/wbitt/Network-MultiTool#kubernetes-1):
+
+Create single pod - without a deployment:
+
+```
+$ kubectl run multitool --image=wbitt/network-multitool
+```
+
+Create a deployment:
+
+```
+$ kubectl create deployment multitool --image=wbitt/network-multitool
+```
+
+Then:
+
+```
+$ kubectl exec -it multitool -- /bin/bash
+```
diff --git a/docs/kubernetes/network/traffic-monitoring.md b/docs/kubernetes/network/traffic-monitoring.md
new file mode 100644
index 0000000000000000000000000000000000000000..4899233925a6320830ae94ae9e7d6bd8302de214
--- /dev/null
+++ b/docs/kubernetes/network/traffic-monitoring.md
@@ -0,0 +1,5 @@
+# K8s traffic monitoring
+
+Capture traffic from all pods:
+
+```
+mizu tap -A
+```
diff --git a/docs/kubernetes/operators.md b/docs/kubernetes/operators.md
new file mode 100644
index 0000000000000000000000000000000000000000..f18a32801fdd2b49db69a43a7148bf33bb6573bb
--- /dev/null
+++ b/docs/kubernetes/operators.md
@@ -0,0 +1,24 @@
+# Kubernetes operators
+
+https://operatorhub.io/what-is-an-operator
+https://operatorhub.io/
+https://www.operatorcon.io/
+
+## Overview
+
+https://github.com/leszko/build-your-operator
+
+## Operator sdk
+
+https://sdk.operatorframework.io/
+
+- Supports Ansible, Helm, and Go operators
+
+## Operator frameworks
+
+- [Kopf for Python](https://github.com/nolar/kopf): the most popular Python framework
+
+## Operator lifecycle manager (OLM)
+
+https://github.com/operator-framework/operator-lifecycle-manager
+https://operator-framework.github.io/olm-book/
diff --git a/docs/kubernetes/os.md b/docs/kubernetes/os.md
new file mode 100644
index 0000000000000000000000000000000000000000..fdd785e6aa2bc779e1a9e012debffc3bbf03850a
--- /dev/null
+++ b/docs/kubernetes/os.md
@@ -0,0 +1,168 @@
+# Container optimized OSes
+
+- [Top Minimal Container Operating Systems for Kubernetes](https://computingforgeeks.com/minimal-container-operating-systems-for-kubernetes/)
+- [A Guide to Linux Operating Systems for Kubernetes](https://thenewstack.io/a-guide-to-linux-operating-systems-for-kubernetes/)
+- https://www.reddit.com/r/devops/comments/iz4zf9/is_there_an_open_source_container_optimized_os/
+
+## Talos
+
+- [Talos](https://www.talos.dev/)
+  The Kubernetes Operating System
+- No SSH, no console, only API
+- [No auto-upgrade](https://www.talos.dev/v1.3/talos-guides/upgrading-talos/#talosctl-upgrade)
+- [terraform-libvirtd-talos](https://codeberg.org/x33u/terraform-libvirtd-talos)
+- Most popular k8s distribution besides k3s in the k8s-at-home community
+
+## Flatcar
+
+- [Flatcar website](https://flatcar-linux.org/)
+- Kinvolk, the company behind Flatcar, got [acquired by Microsoft](https://kinvolk.io/blog/2021/04/microsoft-acquires-kinvolk/)
+- Flatcar Container Linux is a drop-in replacement for CoreOS Container Linux
+- [Running Flatcar Container Linux on libvirt](https://flatcar-linux.org/docs/latest/installing/vms/libvirt/)
+  - [Terraform and libvirt](https://flatcar-linux.org/docs/latest/installing/vms/libvirt/)
+- [Terraform](https://flatcar-linux.org/docs/latest/provisioning/terraform/)
+
+Features:
+
+- [minimal amount of tools to run container workloads](https://www.flatcar.org/docs/latest/container-runtimes/): Docker, Kubernetes
+  - Open feature request: [Flatcar Podman extension](https://github.com/flatcar/Flatcar/issues/112)
+- Automated atomic updates
+- Immutable OS image (`/usr` is a read-only partition and
+  there's no package manager to install packages)
+- Flatcar uses the USR-A and USR-B update mechanism, first introduced by ChromeOS
+
+## Photon OS
+
+- [Website](https://vmware.github.io/photon/)
+- By vmware
+- [No qcow2 images](https://github.com/vmware/photon/wiki/Downloading-Photon-OS)
+
+## Fedora CoreOS (FCOS)
+
+[Fedora CoreOS](https://getfedora.org/en/coreos?stream=stable)
+is the official successor to CoreOS Container Linux
+
+- [Docs](https://docs.fedoraproject.org/en-US/fedora-coreos/)
+- [Issues](https://github.com/coreos/fedora-coreos-tracker)
+
+Features:
+
+- automatically-updating
+
+### Installation
+
+FCOS reads and applies the configuration file with Ignition.
+
+- [What is Ignition?](https://coreos.com/ignition/docs/latest/what-is-ignition.html)
+- <https://github.com/coreos/ignition>
+
+#### Install with libvirt
+
+<https://docs.fedoraproject.org/en-US/fedora-coreos/getting-started/#_launching_with_qemu_or_libvirt>
+
+#### Ignition file
+
+https://docs.fedoraproject.org/en-US/fedora-coreos/producing-ign/
+
+```
+docker pull quay.io/coreos/fcct:release
+docker run -i --rm quay.io/coreos/fcct:release --pretty --strict < varac.fcc > varac.ign
+```
+
+#### Install
+
+```
+cd ~/kubernetes/os/coreos
+wget https://builds.coreos.fedoraproject.org/prod/streams/stable/builds/31.20200210.3.0/x86_64/fedora-coreos-31.20200210.3.0-qemu.x86_64.qcow2.xz
+unxz fedora-coreos-31.20200210.3.0-qemu.x86_64.qcow2.xz
+./install.sh
+```
+
+#### Run
+
+Questions:
+
+- sudo ?
+
+Processes running after boot:
+
+- systemd
+  - init
+  - systemd-journal
+  - systemd-logind
+- NetworkManager
+- chronyd
+- sssd (System Security Services Daemon)
+  - sssd_be
+  - sssd_nss
+- dbus-broker-launch / dbus-broker
+- sshd
+- zincati (OS update daemon)
+- polkitd
+- dhclient
+- agetty
+
+## Kairos
+
+- [Website](https://kairos.io/)
+
+## BalenaOS
+
+- [BalenaOS](https://www.balena.io/os)
+
+## Out of scope
+
+- [Google Container-Optimized OS](https://cloud.google.com/container-optimized-os/docs)
+- [AWS Bottlerocket](https://aws.amazon.com/de/bottlerocket/)
+  Open Source OS for Container Hosting from Amazon
+
+## Deprecated / outdated OSes
+
+### Floe Linux
+
+- [Floe Linux](https://floelinux.github.io/)
+  Floe is a lightweight Linux distribution made specifically to run Linux containers.
+  It uses Tiny Core Linux, runs completely from RAM and is a ~25 MB download.
+
+### Kutter OS
+
+- [Kutter OS](https://kutter-os.github.io/)
+  The aim is to make a minimal OS for running Kubernetes.
+
+### boot2podman
+
+- [boot2podman](https://boot2podman.github.io/)
+
+### HypriotOS
+
+- [HypriotOS](https://blog.hypriot.com/about/#hypriotos:6083a88ee3411b0d17ce02d738f69d47):
+  make container technology a first class citizen on ARM and IoT devices
+
+### k3os
+
+- see `./k3os.md`
+- [The project is dead but ready to get into contributor mode](https://github.com/rancher/k3os/issues/846)
+
+### CoreOS Container Linux
+
+from https://coreos.com/os/eol/:
+
+> End-of-life announcement for CoreOS Container Linux
+> On May 26, 2020, CoreOS Container Linux will reach its end of life and will no longer receive updates. We strongly recommend that users begin migrating their workloads to another operating system as soon as possible.
+
+https://coreos.com/os/docs/latest/
+
+From https://en.wikipedia.org/wiki/Container_Linux:
+
+> Container Linux (formerly CoreOS Linux) is an open-source lightweight operating system based on the Linux kernel and designed for providing infrastructure to clustered deployments, while focusing on automation, ease of application deployment, security, reliability and scalability
+
+### Rancher OS
+
+- [Docs](https://rancher.com/docs/os/v1.x/)
+
+> Note that RancherOS 1.x is currently in a maintain-only-as-essential mode, and it is no longer being actively maintained at a code level other than addressing critical or security fixes.
+
+## Unrelated
+
+- [kured](https://github.com/kubereboot/kured): Kubernetes Reboot Daemon
diff --git a/docs/kubernetes/rbac.md b/docs/kubernetes/rbac.md
index 11983ad550dbae01bf067aef3b7a507e72f86854..256ed5ab92aa9fa4d7cda1babd6239218778094d 100644
--- a/docs/kubernetes/rbac.md
+++ b/docs/kubernetes/rbac.md
@@ -1,3 +1,3 @@
 # K8S RBAC
 
-* [Using RBAC Authorization](https://kubernetes.io/docs/reference/access-authn-authz/rbac/)
+- [Using RBAC Authorization](https://kubernetes.io/docs/reference/access-authn-authz/rbac/)
diff --git a/docs/kubernetes/resources.md b/docs/kubernetes/resources.md
index 32c29e1b8593daab1bb47187463a798efad9e593..00725d6dac4812847774599f8443770bdf7eb548 100644
--- a/docs/kubernetes/resources.md
+++ b/docs/kubernetes/resources.md
@@ -4,7 +4,9 @@
 
 <https://github.com/kubernetes/kubernetes/issues/17512>
 
-    kubectl describe nodes | grep 'Name:\|Allocated' -A 5 | grep 'Name\|memory'
+```
+kubectl describe nodes | grep 'Name:\|Allocated' -A 5 | grep 'Name\|memory'
+```
 
 ## Pod requests and limits
 
@@ -14,16 +16,20 @@
 
 Current Cpu/Mem usage:
 
-    $ kubectl -n mastodon top pod -l app.kubernetes.io/name=elasticsearch --sum=true
+```
+$ kubectl -n mastodon top pod -l app.kubernetes.io/name=elasticsearch --sum=true
+```
 
 The equivalent prometheus query for the above kubectl cmd:
 
-    export MATCH='namespace="mastodon", pod=~".*elasticsearch.*"
+```
+export MATCH='namespace="mastodon", pod=~".*elasticsearch.*"'
+```
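+
+For example, the current CPU usage evaluated with the `prom_query` helper used
+elsewhere in these notes (any Prometheus HTTP client works):
+
+```
+prom_query "sum(node_namespace_pod_container:container_cpu_usage_seconds_total:sum_irate{$MATCH}) by (pod)*1024"
+```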
 
-* Be aware: `image!=""` for memory queries does make a difference !
+- Be aware: `image!=""` does make a difference for memory queries!
 
-| Query     | CPU (mil cores)  | Mem (Mb)    |
+| Query | CPU (mil cores) | Mem (Mb) |
 |---------------- | --------------- | --------------- |
-| Current metrics  | `sum(node_namespace_pod_container:container_cpu_usage_seconds_total:sum_irate{$MATCH}) by (pod)*1024` | `sum(container_memory_working_set_bytes{$MATCH, image!=''}/1024^2) by (pod)` |
+| Current metrics | `sum(node_namespace_pod_container:container_cpu_usage_seconds_total:sum_irate{$MATCH}) by (pod)*1024` | `sum(container_memory_working_set_bytes{$MATCH, image!=''}/1024^2) by (pod)` |
 | Average resource usage over time | `sum(rate(container_cpu_usage_seconds_total{$MATCH}[1w])) by (pod)*1024` | `avg_over_time(sum(container_memory_working_set_bytes{namespace="mastodon", pod=~".*elasticsearch.*", image!=""}) by (pod)[1w:])/1024^2` |
 | Total sum over time | `sum(rate(container_cpu_usage_seconds_total{$MATCH}[1w]))*1024` | `sum(avg_over_time(sum(container_memory_working_set_bytes{$MATCH})[1w:]))/1024^2` |
diff --git a/docs/kubernetes/secrets/secrets.md b/docs/kubernetes/secrets/secrets.md
index d98154022353c19ed8a79f7d50a6563bc7aeeceb..649d084c4533f8d4598123d4b3ef3c28ae984143 100644
--- a/docs/kubernetes/secrets/secrets.md
+++ b/docs/kubernetes/secrets/secrets.md
@@ -7,13 +7,19 @@
 ### sealed-secrets
 
 - [github](https://github.com/bitnami-labs/sealed-secrets)
+
 - [flux with sealed-secrets](https://fluxcd.io/docs/guides/sealed-secrets)
+
 - [helm chart](https://artifacthub.io/packages/helm/bitnami-labs/sealed-secrets)
+
 - [helm chart source](https://github.com/bitnami-labs/sealed-secrets/tree/main/helm/sealed-secrets)
+
 - [example ca.crt to test local encryption](https://raw.githubusercontent.com/181192/local-k8s-env/master/ca.crt)
+
 - [Tutorial](https://www.arthurkoziel.com/encrypting-k8s-secrets-with-sealed-secrets/)
 
 - cli-tool: `brew install kubeseal`
+
 - Sealed Secrets decrypts the secret server-side, like sops
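+
+A minimal `kubeseal` round-trip sketch (secret name/value and the controller
+namespace are assumptions):
+
+```
+kubectl create secret generic mysecret --dry-run=client \
+  --from-literal=password=hunter2 -o yaml \
+  | kubeseal --controller-namespace kube-system --format yaml > mysealed.yaml
+kubectl apply -f mysealed.yaml
+```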
 
 sealed-secrets vs. sops:
diff --git a/docs/kubernetes/security.md b/docs/kubernetes/security.md
new file mode 100644
index 0000000000000000000000000000000000000000..c62680816288500e11837af5b923201bcf75df73
--- /dev/null
+++ b/docs/kubernetes/security.md
@@ -0,0 +1,49 @@
+# Kubernetes security
+
+https://rancher.com/blog/2020/kubernetes-security-vulnerabilities/
+
+https://geekflare.com/kubernetes-security-scanner/
+
+## Test suites
+
+- [kube-linter](https://github.com/stackrox/kube-linter/blob/main/.pre-commit-hooks.yaml)
+- [datree](https://github.com/datreeio/datree)
+
+### kube-bench
+
+https://github.com/aquasecurity/kube-bench
+
+On the node:
+
+```
+wget https://github.com/aquasecurity/kube-bench/releases/download/v0.5.0/kube-bench_0.5.0_linux_amd64.deb
+apt install ./kube-bench_0.5.0_linux_amd64.deb
+wget -O cfg/config.yaml https://raw.githubusercontent.com/aquasecurity/kube-bench/main/cfg/config.yaml
+kube-bench
+```
+
+### kube-hunter
+
+https://github.com/aquasecurity/kube-hunter
+
+On the node:
+
+```
+pip3 install --user kube-hunter
+~/.local/bin/kube-hunter
+```
+
+### kubectl-kubesec
+
+https://github.com/controlplaneio/kubectl-kubesec
+
+```
+kubectl krew install kubesec-scan
+kubectl kubesec-scan
+
+kubectl kubesec-scan -n varac pod thelounge-57cc74b65f-m8sc4
+```
+
+## Etc
+
+https://github.com/derailed/popeye
diff --git a/docs/kubernetes/stackspin/aws.md b/docs/kubernetes/stackspin/aws.md
new file mode 100644
index 0000000000000000000000000000000000000000..521648294034e7e04489e761802e56c247237231
--- /dev/null
+++ b/docs/kubernetes/stackspin/aws.md
@@ -0,0 +1,51 @@
+# Running Stackspin on AWS
+
+## k3s installation
+
+You can run OAS on an AWS EC2 node but you need to consider the following
+limitations:
+
+### Provide ansible with external IP addr
+
+An AWS EC2 node by itself doesn't know its publicly assigned IP address.
+Therefore you need to provide it in the installation/upgrade step like this:
+
+```
+python -m stackspin gl.varac.net install --ansible-param '-e ip_address=52.58.18.134'
+```
+
+### metallb
+
+Because `metallb` doesn't work on AWS nodes, remove the `--disable=metallb`
+from the k3s startup parameters:
+
+```
+vi /etc/systemd/system/k3s.service
+systemctl daemon-reload
+systemctl restart k3s.service
+```
+
+## Stackspin installation
+
+### metallb
+
+Stackspin uses [metallb](https://metallb.universe.tf/) as load balancer, mostly
+because we want the ingress controller [ingress-nginx](https://kubernetes.github.io/ingress-nginx)
+to know about the external IP so it can get configured to [block or allow certain
+IP ranges](https://open.greenhost.net/openappstack/openappstack/-/issues/660).
+This works fine in certain environments like a plain VPS but
+[metallb won't work on AWS or other cloud providers](https://metallb.universe.tf/installation/clouds).
+
+The solution is to use [k3s integrated service load balancer](https://rancher.com/docs/k3s/latest/en/networking/#service-load-balancer)
+instead of `metallb`. You can achieve this by adding the following override to
+your `$CLUSTERDIR/group_vars/all/settings.yml` file:
+
+```
+k3s:
+  version: 'v1.18.6+k3s1'
+  server_args: '--disable traefik --disable local-storage'
+```
+
+**Attention:** There's currently no easy way to disable/opt-out of metallb,
+see <https://open.greenhost.net/stackspin/stackspin/-/issues/720> for more
+details.
diff --git a/docs/kubernetes/stackspin/flux.md b/docs/kubernetes/stackspin/flux.md
new file mode 100644
index 0000000000000000000000000000000000000000..63a2a0bb9ea05cf20b0372f7a771437c8324a79a
--- /dev/null
+++ b/docs/kubernetes/stackspin/flux.md
@@ -0,0 +1,29 @@
+# Stackspin flux
+
+- Install/configure cluster with Ansible:
+
+  ```
+  python3 -m openappstack oas.varac.net install --install-kubernetes --no-install-openappstack
+  ```
+
+- Copy and edit `.flux.env`:
+
+  ```
+  export CLUSTER_DIR=~/oas/openappstack/clusters/oas.varac.net
+  export KUBECONFIG=${CLUSTER_DIR}/kube_config_cluster.yml
+  cp install/.flux.env.example ${CLUSTER_DIR}/.flux.env
+  edit ${CLUSTER_DIR}/.flux.env
+  ```
+
+- Apply `secret/oas-cluster-variables` (generated from `.flux.env`):
+
+  ```
+  cp install/kustomization.yaml ${CLUSTER_DIR}
+  kubectl apply -k ${CLUSTER_DIR}
+  kubectl view-secret -n flux-system oas-cluster-variables
+  kubectl view-secret -n flux-system oas-cluster-variables ip_address
+  ```
+
+- Initial OAS installation:
+
+  ```
+  ./install/install-openappstack.sh
+  ```
+
+- Install apps:
+
+  ```
+  bash ./install/install-app.sh wekan
+  ```
+
+- Restore
diff --git a/docs/kubernetes/stackspin/stackspin.md b/docs/kubernetes/stackspin/stackspin.md
new file mode 100644
index 0000000000000000000000000000000000000000..f9b0706974439a5d07d3037368b6bfe83a5e3f77
--- /dev/null
+++ b/docs/kubernetes/stackspin/stackspin.md
@@ -0,0 +1,139 @@
+# stackspin
+
+## Create new cluster
+
+<https://docs.stackspin.net/en/latest/installation/create_cluster.html>
+
+```
+export CLUSTER_NAME=varac-test2 CLUSTER_DOMAIN=stackspin.net
+```
+
+A. Create config for existing host:
+
+```
+python3 -m stackspin ${CLUSTER_NAME}.${CLUSTER_DOMAIN} create --ip-address 10.27.64.131 --create-hostname ${CLUSTER_NAME} --subdomain ${CLUSTER_DOMAIN}
+```
+
+B. Create config and droplet using the Greenhost API:
+
+```
+python3 -m stackspin ${CLUSTER_NAME}.${CLUSTER_DOMAIN} create --create-droplet --ssh-key-id 407 --create-domain-records --subdomain ${CLUSTER_NAME} ${CLUSTER_DOMAIN}
+```
+
+Install kubernetes cluster:
+
+```
+python -m stackspin ${CLUSTER_NAME}.${CLUSTER_DOMAIN} install
+```
+
+## Install stackspin
+
+### Install kubernetes cluster
+
+<https://docs.stackspin.net/en/v0.7/installation/install_oas.html>
+
+Edit stackspin config file:
+
+```
+cd stackspin/stackspin
+export CLUSTER_DIR=/home/varac/stackspin/stackspin/clusters/${CLUSTER_NAME}.${CLUSTER_DOMAIN}
+export KUBECONFIG=$CLUSTER_DIR/kube_config_cluster.yml
+cp install/.flux.env.example $CLUSTER_DIR/.flux.env
+vi  $CLUSTER_DIR/.flux.env
+```
+
+Create secret/stackspin-cluster-variables:
+
+```
+cp install/kustomization.yaml $CLUSTER_DIR/
+kubectl get namespace flux-system 2>/dev/null || kubectl create namespace flux-system
+kubectl apply -k $CLUSTER_DIR
+```
+
+### Install / update stackspin
+
+Install stackspin
+
+```
+./install/install-stackspin.sh
+```
+
+Install apps
+
+```
+./install/install-app.sh monitoring
+...
+```
+
+Run testinfra tests:
+
+<https://docs.stackspin.net/en/v0.7/troubleshooting.html#testinfra-tests>
+
+```
+cd test
+export CLUSTER_DIR=/home/varac/stackspin/clusters/stackspin.varac.net
+py.test -sv --ansible-inventory=${CLUSTER_DIR}/inventory.yml --hosts='ansible://*'
+```
+
+Run taiko tests:
+
+<https://docs.stackspin.net/en/v0.7/troubleshooting.html#taiko-tests>
+
+...
+
+## Prometheus on stackspin.varac.net
+
+Edit Custom prometheus config at
+`~/stackspin/stackspin/clusters/stackspin.varac.net/group_vars/all/prometheus.yml`.
+
+Redeploy prometheus config-secret afterwards:
+
+```
+cd ~/stackspin/stackspin
+python3 -m stackspin stackspin.varac.net install --ansible-param='--tags=prometheus'
+```
+
+Secret `prometheus-settings` in namespace `stackspin` contains the new config:
+
+```
+kubectl view-secret -n stackspin prometheus-settings values.yaml
+```
+
+But the configmap `prometheus-server` didn't update; check the latest timestamp:
+
+```
+kubectl -n stackspin get cm prometheus-server -o jsonpath='{range .metadata.managedFields[*]}{.operation}{"\t"}{.time}{"\n"}{end}'
+```
+
+In order to update this configmap we need to force a helm upgrade for
+prometheus. Because there are no git changes which the helm-operator can act
+upon, there's no other way than to restart the helm-operator pod:
+
+```
+kubectl -n stackspin rollout restart deployment flux-custom
+```
+
+# Restart helm-operator
+
+```
+kubectl -n stackspin rollout restart deployment helm-operator
+```
+
+# hydra / single-sign-on
+
+- [.well-known/openid-configuration](https://sso.stackspin.greenhost.net/.well-known/openid-configuration) for client auto-configuration
+
+# Prom queries
+
+List of pods and container mem usage, sorted by pod:
+
+```
+prom_query -o json 'sort_desc(round(avg by (pod, container) (container_memory_working_set_bytes{container!=""})/1024/1024))' | jq -r '( .[] | "\(.metric.pod),\(.metric.container),\(.value[1])" )' | column -t | sort --field-separator=',' --key=2 | sed 's/^/[ ],/' | sed '1s/^/-,Pod,Container,Mem util\n/' | csvlook
+```
+
+## Locally template charts based on Stackspin defaults
+
+```bash
+yq '.data' ~/stackspin/stackspin/flux2/core/base/single-sign-on/kratos-values-configmap.yaml | tail -n +2 > /tmp/cm.yaml
+helm -n test template -f /tmp/cm.yaml kratos-test ory/kratos
+```
diff --git a/docs/kubernetes/storage/cifs.md b/docs/kubernetes/storage/cifs.md
new file mode 100644
index 0000000000000000000000000000000000000000..6044774fc436bd65daf73d3f2a373308dc00acc7
--- /dev/null
+++ b/docs/kubernetes/storage/cifs.md
@@ -0,0 +1,64 @@
+# CIFS storage on kubernetes
+
+## kubernetes-csi / csi-driver-smb
+
+<https://github.com/kubernetes-csi/csi-driver-smb>
+
+- As of 2020-09: Actively maintained (22 contributors, recent releases, 34
+  stars)
+- [Helm chart](https://github.com/kubernetes-csi/csi-driver-smb/tree/master/charts)
+
+### Test pod
+
+<https://github.com/kubernetes-csi/csi-driver-smb/blob/master/deploy/example/e2e_usage.md>
+<https://github.com/kubernetes-csi/csi-driver-smb/blob/master/docs/driver-parameters.md>
+
+- Test if cifs share works: `smbclient -U oas2_nucy //varac-nas/oas2_nucy`
+  (pw see `server/varac-nas/user/oas2_nucy`)
+- `kubectl -n varac create secret generic test-smb-creds --from-literal username=k8s --from-literal password="..."`
+
+## juliohm CIFS volume driver
+
+<https://k8scifsvol.juliohm.com.br/>
+<https://github.com/juliohm1978/kubernetes-cifs-volumedriver>
+
+- As of 2021-07: Actively maintained (6 contributors, recent releases, 77 ⭐)
+
+- No helm chart, installation seems a bit clunky:
+
+- requires `cifs-utils` installed on the host
+
+  ```
+  apt-get install -y cifs-utils jq
+  cd kubernetes-cifs-volumedriver/
+  kubectl apply -f install.yaml
+  ```
+
+Verify:
+
+```
+kubectl -n default exec -it juliohm-cifs-volumedriver-installer-m96zq -- ls /flexmnt/juliohm~cifs/cifs
+```
+
+Remove provisioning container:
+
+```
+kubectl delete -f install.yaml
+```
+
+## Deprecated drivers
+
+### Azure CIFS/SMB FlexVolume driver for Kubernetes (Deprecated)
+
+> WARNING: This driver is in maintenance mode. Please use SMB CSI driver instead.
+
+- <https://github.com/Azure/kubernetes-volume-drivers/tree/master/flexvolume/smb>
+- [helm chart](https://github.com/Azure/kubernetes-volume-drivers/tree/master/flexvolume/smb/helm)
+
+### fstab/cifs Flexvolume Plugin for Kubernetes
+
+<https://labs.consol.de/kubernetes/2018/05/11/cifs-flexvolume-kubernetes.html>
+<https://github.com/fstab/cifs>
+
+- As of 2020-09: last commit was 2019-12 (2 contributors, no recent releases, 80
+  stars)
diff --git a/docs/kubernetes/storage/local-path.md b/docs/kubernetes/storage/local-path.md
new file mode 100644
index 0000000000000000000000000000000000000000..9c51ce9f3e07600c56dd07e229e3e603d490d7c5
--- /dev/null
+++ b/docs/kubernetes/storage/local-path.md
@@ -0,0 +1,14 @@
+# Local path provisioners
+
+## Rancher local path provisioner
+
+- [GitHub](https://github.com/rancher/local-path-provisioner)
+
+### Issues
+
+- [Not working for subPath](https://github.com/rancher/local-path-provisioner/issues/4)
+  - [Suggestion to switch to `openebs/dynamic-localpv-provisioner`](https://github.com/rancher/local-path-provisioner/issues/4#issuecomment-1315596924)
+
+## Other local path provisioners
+
+- [openebs/dynamic-localpv-provisioner](https://github.com/openebs/dynamic-localpv-provisioner)
diff --git a/docs/kubernetes/storage/nfs.md b/docs/kubernetes/storage/nfs.md
new file mode 100644
index 0000000000000000000000000000000000000000..3ade9d5a8af40ed704776018c9f829aa65790922
--- /dev/null
+++ b/docs/kubernetes/storage/nfs.md
@@ -0,0 +1,15 @@
+# NFS storage on kubernetes
+
+- <https://matthewpalmer.net/kubernetes-app-developer/articles/kubernetes-volumes-example-nfs-persistent-volume.html>
+- <https://josebiro.medium.com/shared-nfs-and-subpaths-in-kubernetes-19ade234896b>
+
+## nfs-client-provisioner
+
+<https://www.padok.fr/en/blog/readwritemany-nfs-kubernetes>
+
+Downside:
+
+- [No user/pw authentication, only host access control](https://github.com/kubernetes-retired/external-storage/issues/1265)
+  <https://unix.stackexchange.com/questions/341854/failed-to-pass-credentials-to-nfs-mount>
+
+  or maybe it works by passing `user=…,pass=…` as mount options (see <https://unix.stackexchange.com/a/494912>)?
diff --git a/docs/kubernetes/storage/pvc.md b/docs/kubernetes/storage/pvc.md
new file mode 100644
index 0000000000000000000000000000000000000000..c69a32a0bdedffeb255d7b8cf3f1b2ee5967b44a
--- /dev/null
+++ b/docs/kubernetes/storage/pvc.md
@@ -0,0 +1,31 @@
+# Access modes
+
+<https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes>
+
+## Set PV reclaim policy
+
+### Manually
+
+<https://kubernetes.io/docs/tasks/administer-cluster/change-pv-reclaim-policy/>
+
+```
+kubectl patch pv <your-pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
+```
+
+### Programmatically by using a custom storage class
+
+<https://medium.com/faun/kubernetes-how-to-set-reclaimpolicy-for-persistentvolumeclaim-7eb7d002bb2e>
+
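+A sketch of a custom storage class with `Retain` reclaim policy (the
+provisioner value is an assumption; adjust it for your cluster):
+
+```
+kubectl apply -f - <<EOF
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: retain-local-path
+provisioner: rancher.io/local-path
+reclaimPolicy: Retain
+EOF
+```
+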
+# Delete persistent volumes
+
+```
+kubectl -n media delete --wait=false pvc emby-config
+kubectl -n media delete --wait=false pv  pvc-c70d703d-cb60-11e9-bf6b-00505600c5be
+```
+
+PVs will change to `Terminating` state and will wait for finalizers.
+Remove all finalizers from the pv metadata:
+
+```
+kubectl patch -p '{"metadata":{"finalizers": []}}' --type=merge pv pvc-d8de8930-bae6-11e9-a969-00505600c5be pvc-ce18ae68-c8fe-11e9-8a83-00505600c5be
+```
diff --git a/docs/kubernetes/testing.md b/docs/kubernetes/testing.md
new file mode 100644
index 0000000000000000000000000000000000000000..29334e8fc4c9d3e26796a2bf936119132bbe3ff5
--- /dev/null
+++ b/docs/kubernetes/testing.md
@@ -0,0 +1,63 @@
+# Kubernetes testing
+
+- [testing-kubernetes-infrastructure](https://www.inovex.de/blog/testing-kubernetes-infrastructure/)
+- [Run custom tests on cluster](https://open.greenhost.net/stackspin/stackspin/-/issues/715)
+
+## Testing pyramid
+
+### Static analysis
+
+- kube-score
+- kube-lint
+- yamllint
+- `kubectl apply -f <file> --dry-run=client --validate=true`
+
+Dry-run:
+
+- `kubectl apply -f <file> --dry-run=server`
+
+### Unit tests
+
+## Tools / frameworks
+
+https://blog.flant.com/small-local-kubernetes-comparison/
+https://thechief.io/c/editorial/k3d-vs-k3s-vs-kind-vs-microk8s-vs-minikube/
+
+- minikube: see `./minikube.md`
+  only run a single node in the local Kubernetes cluster
+- Kind: see `./kind.md`
+- [k3s](https://github.com/k3s-io/k3s)
+  - [k3d](https://k3d.io) is a platform-agnostic, lightweight wrapper
+    that runs K3s in a docker container.
+- [microk8s](https://microk8s.io/): Created by Canonical,
+  microK8S is a Kubernetes distribution designed to run
+  fast, self-healing, and highly available Kubernetes clusters
+
+### Sonobuoy
+
+- [sonobuoy](https://sonobuoy.io/)
+- [docs](https://sonobuoy.io/docs)
+- [Fast and Easy Sonobuoy Plugins for Custom Testing of Almost Anything](https://tanzu.vmware.com/content/blog/fast-and-easy-sonobuoy-plugins-for-custom-testing-of-almost-anything)
+
+Usage:
+
+```
+cd ~/kubernetes/testing/sonobuoy
+sonobuoy run --plugin hello-world.yaml --wait
+outfile=$(sonobuoy retrieve) && tar -xf $outfile -C results && cat results/plugins/debug/results/*/out*
+sonobuoy delete --wait
+```
+
+### Kubetest2, gomega & ginkgo
+
+- [End-to-End Testing with kubetest2, ginkgo and gomega](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-testing/e2e-tests-kubetest2.md)
+- [kubetest2](https://github.com/kubernetes-sigs/kubetest2)
+  - [no user docs for kubetest2](https://github.com/kubernetes-sigs/kubetest2/issues/134)
+- [gomega](https://github.com/onsi/gomega)
+- [ginkgo](https://github.com/onsi/ginkgo)
+
+### etc
+
+- [Conformance Testing in Kubernetes](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/conformance-tests.md)
+- [Testing and Kubernetes](https://mechanicalrock.github.io/2018/11/07/kubernetes-testing-e2e.html)
+- [netassert](https://github.com/controlplaneio/netassert)