NeonPandora¶
Goal: Create a representative Kubernetes cluster with:
- a setup which a dev team could use
- an (internal) docker registry
- an (internal) git repository
- a Quarkus-based application
- in-cluster builds of the application at check-in
- continuous deployment
- metrics collected from the application
- alerts
About these slides¶
These slides are written with Jupyter Notebook for ease of integrating with bash command line.
When looking at the slides, the %%bash sections might be truncated. However, if you copy the contents, it will work fine.
import notebook
print("These slides were made with notebook version:")
print(notebook.__version__)
Tooling - this is what gets used today¶
Install:
- https://docs.docker.com/install/
- https://kubernetes.io/docs/tasks/tools/install-kubectl/
- (Contained in kubectl: https://kubectl.docs.kubernetes.io/references/kustomize/)
- https://github.com/kubernetes-sigs/kind
- https://fluxcd.io/docs/installation/#install-the-flux-cli
- https://github.com/mikefarah/yq
- https://quarkus.io/get-started/
Note: On Mac you will need gnu-sed. To avoid forgetting this, the scripts here use gsed; on Linux you need to replace this with sed, or create an alias. On Linux, yq might be packaged as go-yq.
Brew installation¶
brew install kubectl
brew install fluxcd/tap/flux
brew install kind
brew install gsed
brew install yq
brew install kustomize
sdk install quarkus
kubectl kustomize is the same as kustomize build. However, we are at one point going to do kustomize create --autodetect ., which is not supported by kubectl directly.
Which versions to expect after installation¶
%%bash
kind --version # Should yield at least 0.20.0
docker ps # Should not give any errors
kubectl version # Should at least give 1.29
flux version --client # Should at least give 2.2.2
yq --version # 4.40.5
quarkus --version # Expect all 3.x.x to be OK
You might need to adjust the git config¶
Check which push behaviour is configured (an empty result means git's default, simple):
git config --global push.default
Making life easier¶
Enable code completion for at least kubectl and flux:
alias kc="kubectl"
source <( kubectl completion zsh) # setup autocomplete in zsh
source <( kubectl completion zsh | sed 's|_kubectl|_kc|g' )
source <( flux completion zsh)
You add this to your .zshrc file.
If you have changed your default branch in git, you will probably run into naming issues with this presentation. If the following gives you any output, you want to comment out the setting whilst following this tutorial:
%%bash
git config --global --get init.defaultBranch
Docker based Kubernetes with Kind¶
Using a local docker registry is a bit tricky. The following is based on instructions for Kind: https://kind.sigs.k8s.io/docs/user/local-registry/
%%bash
docker run \
-d --restart=always -p "127.0.0.1:5001:5000" \
--network bridge --name "kind-registry" \
registry:2
This will start the registry as a separate docker container.
Notice that it is started without a backing volume, so deleting the container will also delete the stored images.
The suggested Kubernetes standard for communicating a local registry: https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/generic/1755-communicating-a-local-registry
Create the Kind cluster¶
(NB: remember to git clone git@github.com:nostra/neon.git first)
%%bash
kind create cluster --config kind-api-cluster.yaml --name=neonpandora
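The config file ships with the repository. As a rough idea of what it needs to express, a minimal sketch in the same spirit could look like the following - the port mappings and the containerd patch here are assumptions, inferred from the NodePorts 31022 and 31090 and the registry setup used later:
# Sketch of a kind config in the spirit of kind-api-cluster.yaml - the real file is in the repo
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
containerdConfigPatches:
  - |-
    [plugins."io.containerd.grpc.v1.cri".registry]
      config_path = "/etc/containerd/certs.d"
nodes:
  - role: control-plane
    extraPortMappings:
      - containerPort: 31022  # sshd NodePort used later
        hostPort: 31022
      - containerPort: 31090  # dashboard NodePort used later
        hostPort: 31090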
Test that it runs OK
%%bash
kubectl config get-contexts
%%bash
kubectl get pods -A
Connect up the local Kind registry¶
%%bash
REGISTRY_DIR="/etc/containerd/certs.d/localhost:5001"
for node in $(kind get nodes --name=neonpandora); do
  docker exec "${node}" mkdir -p "${REGISTRY_DIR}"
  cat <<EOF | docker exec -i "${node}" cp /dev/stdin "${REGISTRY_DIR}/hosts.toml"
[host."http://kind-registry:5000"]
EOF
done
Connect the registry container to the kind docker network:
%%bash
docker network connect "kind" kind-registry
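To verify that the registry container actually joined the network (a quick sanity check, not part of the original flow):
%%bash
# List the containers attached to the kind network - expect kind-registry among the cluster nodes
docker network inspect kind -f '{{range .Containers}}{{.Name}} {{end}}'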
Kind registry config¶
Make Kubernetes inside the Kind cluster use the registry from the docker container:
%%bash
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: local-registry-hosting
  namespace: kube-public
data:
  localRegistryHosting.v1: |
    host: "localhost:5001"
    help: "https://kind.sigs.k8s.io/docs/user/local-registry/"
EOF
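As a sanity check (not part of the original setup), you can push a throwaway image through the local registry and confirm it lands there:
%%bash
docker pull busybox:stable
docker tag busybox:stable localhost:5001/busybox:test
docker push localhost:5001/busybox:test
curl -s http://localhost:5001/v2/_catalog   # should list "busybox"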
SSH server in Kind¶
Create ssh keys for git:
%%bash
ssh-keygen -f ~/.ssh/fluxpres -N "" -C "Key used for flux presentation"
As the different git repositories represent different access levels, we would use different keys in a real-life scenario.
Bootstrap the sshd service, and use the key there:
%%bash
pushd dockerimage/sshd/
cp ~/.ssh/fluxpres.pub .
docker build -t neon.local.gd:5001/flux_sshd:v1 .
%%bash
docker push neon.local.gd:5001/flux_sshd:v1
Start the sshd server, in which we will store our git repos:
%%bash
kubectl create ns management
kubectl create -n management -k base/sshd
GitOps with Flux¶
First create a repository on the ssh server inside the Kubernetes cluster:
%%bash
ssh -i ~/.ssh/fluxpres -p 31022 fluxpres@neon.local.gd \
git init --bare git/neonflux.git < /dev/null
If you have trouble with the command above, add this to ~/.ssh/config:
Host neon.local.gd
    User fluxpres
    IdentityFile ~/.ssh/fluxpres
Clone and install flux system¶
%%bash
mkdir -p ~/scrap/work
git clone \
ssh://fluxpres@neon.local.gd:31022/home/fluxpres/git/neonflux.git \
~/scrap/work/neonflux
cp -r flux $HOME/scrap/work/neonflux/.
cp -r base $HOME/scrap/work/neonflux/.
cp .gitignore $HOME/scrap/work/neonflux/.gitignore
... and enable Flux¶
%%bash
pushd $HOME/scrap/work/neonflux
flux install \
--components=source-controller,kustomize-controller,helm-controller,notification-controller \
--components-extra=image-reflector-controller,image-automation-controller \
--export > ./flux/system/gotk-components.yaml &&
git add . ; git commit -a -m "Initial commit" ; git push
popd
... Flux needs a secret to read git¶
Add the ssh key as a secret to the flux system. Notice that the known_hosts entry gets adjusted. In a later example, we just skip the host key verification.
%%bash
pushd ~/scrap/work/neonflux
flux create secret git flux-system \
--url=ssh://fluxpres@neon.local.gd:31022/home/fluxpres/git/flux-system.git \
--private-key-file=$HOME/.ssh/fluxpres --namespace=flux-system \
--export > flux-system-secret.yaml
echo "Correct sshd hostname to what it is inside cluster"
export POD=$( kubectl get pods -n management -l app=sshd -o yaml | yq '.items[].metadata.name' )
export LINE=$( kubectl exec -t -n management $POD -- ssh-keyscan sshd.management | grep sha2 | gsed '/./,$!d' )
gsed -i 's|\(known_hosts:\).*|\1 '"$LINE"'|g' flux-system-secret.yaml
mv flux-system-secret.yaml flux/system/flux-system-secret.yaml
git add .
git commit -a -m "Add secret and sync configurations" && git push
popd
Notice that in a real-life scenario the secret would be encrypted with Mozilla SOPS or Sealed Secrets in order to avoid storing plain-text secrets in git, or it would be placed in a vault.
Bootstrap synchronization of flux-system.
%%bash
kubectl create -k ~/scrap/work/neonflux/flux/system
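The flux/system directory applied here was copied in with the repository earlier. As a rough sketch of what such a Flux git synchronization typically looks like - the names, branch, path and in-cluster URL below are assumptions, the real definitions ship with the repo:
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: flux-system
  namespace: flux-system
spec:
  interval: 1m
  url: ssh://fluxpres@sshd.management:22/home/fluxpres/git/neonflux.git
  ref:
    branch: main
  secretRef:
    name: flux-system    # the secret created above
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: flux-system
  namespace: flux-system
spec:
  interval: 10m
  path: ./flux
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-system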
%%bash
kubectl -n flux-system get pods --watch
%%bash
flux get kustomization -A
Webhook to trigger Flux upon check-in¶
FluxCD has a number of ways to set up webhooks.
A token can be generated with head -c 12 /dev/urandom | shasum | cut -d ' ' -f1, but I'm not regenerating it for this example.
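The webhook receiver itself is defined in the repository's flux configuration; a rough sketch of such a Receiver (the secret name here is an assumption):
apiVersion: notification.toolkit.fluxcd.io/v1
kind: Receiver
metadata:
  name: flux-webhook
  namespace: flux-system
spec:
  type: generic
  secretRef:
    name: webhook-token    # assumed name; holds the token mentioned above
  resources:
    - kind: GitRepository
      name: flux-system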
%%bash
WEBHOOK_PATH=$(kubectl get -n flux-system Receiver flux-webhook -o yaml|yq eval '.status.webhookPath'| cut -c 2-)
gsed -i 's|\(WEBHOOK_PATH=\).*|\1'"$WEBHOOK_PATH"'|g' post-webhook.sh
%%bash
scp -i ~/.ssh/fluxpres -P 31022 post-webhook.sh \
fluxpres@neon.local.gd:/home/fluxpres/git/neonflux.git/hooks/post-receive
ssh -i ~/.ssh/fluxpres -p 31022 fluxpres@neon.local.gd \
chmod a+x /home/fluxpres/git/neonflux.git/hooks/post-receive
Set up a Quarkus project¶
Create a repository for code:
%%bash
ssh -i ~/.ssh/fluxpres -p 31022 fluxpres@neon.local.gd \
git init --bare git/neoncode.git
%%bash
git clone ssh://fluxpres@neon.local.gd:31022/home/fluxpres/git/neoncode.git $HOME/scrap/work/neoncode
pushd $HOME/scrap/work/neoncode/
git branch -M main
echo "# Basis for a project" > README.md
git add README.md
git commit -a -m "Initial commit"
git push
Quarkus initialization¶
%%bash
pushd $HOME/scrap/work/neoncode
quarkus create app --dry-run no.scienta:neoncode
quarkus create app no.scienta:neoncode
find neoncode -maxdepth 1 -exec mv {} . \;
# (Ignore error above)
rmdir neoncode
quarkus ext add io.quarkus:quarkus-kubernetes
quarkus ext add io.quarkus:quarkus-micrometer-registry-prometheus
quarkus ext add io.quarkus:quarkus-container-image-jib
./mvnw quarkus:add-extension -Dextensions="io.quarkus:quarkus-smallrye-health"
git add .
git commit -a -m "Quarkus project created"
git push
popd
Add application properties¶
%%bash
cp quarkus-app.properties $HOME/scrap/work/neoncode/src/main/resources/application.properties
cp .gitignore $HOME/scrap/work/neoncode/.gitignore
pushd $HOME/scrap/work/neoncode
git commit -a -m "Application configration added"
git push
popd
Run the Quarkus project¶
Build and run it locally:
%%bash
pushd $HOME/scrap/work/neoncode
quarkus dev
Call it:
%%bash
curl -s http://localhost:8080/hello
Next:
- Activate it in the cluster
- Automatic image-update
Build with Tekton...¶
The Tekton pipeline needs to be able to pull code from the repository too:
%%bash
cd ~/scrap/work/neonflux/
export LINE=$( cat $HOME/.ssh/fluxpres | base64 )
gsed -i 's|\(id_rsa:\).*|\1 '"$LINE"'|g' flux/tenant/neon-builder/git-credentials.yaml
git commit flux -m "Update credentials"
git push
NB: On Linux, use "base64 -w 0" when encoding.
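A portable alternative that behaves the same on both macOS and Linux is to strip the newlines explicitly:
%%bash
# Works regardless of whether base64 wraps its output
export LINE=$( base64 < $HOME/.ssh/fluxpres | tr -d '\n' )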
Tekton elements¶
| element | description |
|---|---|
| step | an actual action, for instance compiling the code |
| task | collection of steps |
| taskrun | run and execute a task; useful for debugging |
| pipeline | run one or more tasks in sequence |
| pipelinerun | execute a pipeline |
Code completion is annoyingly missing in IntelliJ, as the CRDs do not define description fields. Red Hat has a plugin which supports Tekton, but it does not play nicely with the existing Kubernetes plugin - at least not in my experience.
Definitions are somewhat verbose, but luckily one does not set up too many different types of build pipelines in the same project.
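To give a feeling for the verbosity, a minimal (hypothetical, not from this project) task with a single step looks roughly like this:
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: hello
spec:
  steps:
    - name: say-hello        # a step: one concrete action
      image: alpine
      script: |
        echo "Hello from a Tekton step"
A TaskRun referencing the task by name is then what actually executes it.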
Tekton has a dashboard¶
(This cannot be started through Jupyter, as it needs to keep running in the background.)
screen -m -d -S tekton \
kubectl --namespace tekton-pipelines port-forward svc/tekton-dashboard 9097:9097
Then open http://localhost:9097/
For this presentation, Tekton is exposed here: http://tekton.local.gd:31090/
Build with Tekton... webhook¶
Create a git webhook. Copy the script to the server-side hook directory:
%%bash
scp -i ~/.ssh/fluxpres -P 31022 post-recieve.sh \
fluxpres@neon.local.gd:/home/fluxpres/git/neoncode.git/hooks/post-receive
ssh -i ~/.ssh/fluxpres -p 31022 fluxpres@neon.local.gd \
chmod a+x /home/fluxpres/git/neoncode.git/hooks/post-receive
This will trigger upon code changes. You can also induce a manual run with a Kubernetes definition:
%%bash
kubectl create -n neon-builder -f flux/tenant/neon-builder/neon-pipeline-run.yaml
or by calling the trigger endpoint manually:
%%bash
kubectl run curl -i --rm --restart=Never --image gcr.io/cloud-builders/curl -- \
-XPOST -d '{"reponame": "neoncode"}' \
http://el-neon-listener.neon-builder:8080
Build with Tekton... : status¶
%%bash
kubectl config set-context --current --namespace=neon-builder
kubectl get pods
%%bash
tkn pipeline list
%%bash
tkn pipelinerun list
%%bash
tkn pipelinerun logs neon-build-run-$HASH
%%bash
docker pull localhost:5001/scienta/neon:1.0.0-SNAPSHOT
Build with Tekton... : feeling¶
- Tekton is rather verbose
- As builds are done through Kubernetes objects, pipeline runs remain as cruft for as long as you need / want to see the logs
- The maturity level is not impressive
- GitHub Actions is easier to grasp, and you can still build inside the cluster
- In any case: You want to consider a separate build cluster for security reasons
- Could be relevant in an air-gapped cluster
Build with Tekton... : SLSA¶
Supply chain security: https://tekton.dev/docs/getting-started/supply-chain-security/
Run application... : app¶
%%bash
cd ~/scrap/work/neoncode
./mvnw clean package
cd target/kubernetes
kustomize create --autodetect .
mkdir -p ~/scrap/work/neonflux/base/neoncode
rm -f ~/scrap/work/neonflux/base/neoncode/*.yaml
kubectl kustomize . -o ~/scrap/work/neonflux/base/neoncode
cd ~/scrap/work/neonflux/base/neoncode
git checkout -- prometheus-k8s-role.yaml prometheus-roleBinding.yaml
kustomize create --autodetect .
cd ..
git add neoncode
git commit neoncode -m "Add / update neon"
git push
Run application... : base¶
%%bash
kubectl port-forward -n backend svc/neoncode 8080:80
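With the port-forward running, the endpoint should answer just as it did locally:
%%bash
curl -s http://localhost:8080/hello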
Release Quarkus image locally¶
For some reason I had trouble getting a Tekton release process running, so I have to do that manually. Add the following to the pom:
<scm>
    <connection>scm:git:ssh://fluxpres@neon.local.gd:31022/home/fluxpres/git/neoncode.git</connection>
    <developerConnection>scm:git:ssh://fluxpres@neon.local.gd:31022/home/fluxpres/git/neoncode.git</developerConnection>
    <tag>HEAD</tag>
</scm>
Commit and push, then:
./mvnw --batch-mode -Darguments="-Dmaven.test.skip=true" -DpreparationGoals="clean install" release:prepare
./mvnw -Darguments="-Dmaven.test.skip=true \
-Dmaven.javadoc.skip=true \
-Dquarkus.container-image.build=true \
-Dquarkus.container-image.insecure=true \
-Dquarkus.container-image.push=true \
-Dquarkus.container-image.registry=neon.local.gd:5001" -Dgoals=package \
release:perform
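To check that the release actually landed in the local registry, you can query the registry's v2 API directly (a convenience check, not part of the release itself):
%%bash
curl -s http://neon.local.gd:5001/v2/scienta/neon/tags/list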
Metrics¶
Prometheus is configured with the kube-prometheus stack - hence the prometheus-k8s and grafana services in the monitoring namespace.
In order to keep the port-forwarding running, I use screen. It has to be started manually, as Jupyter does not want to run applications in the background.
screen -d -m -S prometheus kubectl port-forward -n monitoring svc/prometheus-k8s 9090
screen -d -m -S grafana kubectl port-forward -n monitoring svc/grafana 3000
For convenience: http://neon.local.gd:31090/
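The quarkus-micrometer-registry-prometheus extension added earlier exposes the application metrics on /q/metrics. With the application port-forward from the previous section still running, you can inspect what Prometheus scrapes:
%%bash
curl -s http://localhost:8080/q/metrics | head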
Alerts¶
You find alerts at the Prometheus endpoint. With the default configuration you get more than 100 useful alerts, which you can set up to trigger a message in a Slack channel, or some other form of alert.
Erlend's personal project mcalert creates a notification icon in the system tray which shows the status of your system.
Future direction¶
Which things do you need in a real development team? You want:
- something which can scan your images for vulnerabilities, for example [Kubeclarity](https://github.com/openclarity/kubeclarity/)
- to make your secrets more secret
  - as this is a local example, all secrets are pushed unencrypted into git. That is a no-no, of course
  - repository secrets might just be created as part of the bootstrap process and not stored in git
  - general secrets should be stored in an external vault
- dashboards for your application health
- to host your own docker proxy and maven repository for security (or in other ways limit what you can fetch online)
- a service mesh, in order to control traffic to your cluster better
There are undoubtedly many more things to consider too, depending on your plans and scale.
Cleanup¶
To remove the kind cluster and what has been set up in docker:
%%bash
kind delete cluster --name=neonpandora
docker stop kind-registry
docker rm kind-registry