Identity infrastructure, simplified for you.
Learn more about ZITADEL by checking out the source repository on GitHub
By default, this chart installs a highly available ZITADEL deployment.
Either follow the guide for deploying ZITADEL on Kubernetes or follow one of the example guides:
You can now define resource requests and limits for the machineKeyWriter separately from the setupJob. If you don't specify resource requests and limits for the machineKeyWriter, it automatically inherits the values used by the setupJob.
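As a minimal sketch, assuming the resources block sits under `setupJob.machinekeyWriter` (mirroring the renamed image properties below) and follows the standard Kubernetes resources structure, with placeholder values:

```yaml
setupJob:
  resources:
    requests:
      cpu: 100m
      memory: 128Mi
  machinekeyWriter:
    # If omitted, the setupJob values above are inherited.
    resources:
      requests:
        cpu: 50m
        memory: 64Mi
```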
To keep the structure of the values.yaml file consistent, certain properties have been renamed. If you use any of the following properties, please review the updated names and adjust your values accordingly:

| Old Value | New Value |
|---|---|
| `setupJob.machinekeyWriterImage.repository` | `setupJob.machinekeyWriter.image.repository` |
| `setupJob.machinekeyWriterImage.tag` | `setupJob.machinekeyWriter.image.tag` |
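For example, a values.yaml snippet using the new nested structure might look like this (the repository and tag values are placeholders):

```yaml
setupJob:
  machinekeyWriter:
    image:
      repository: my-registry/kubectl  # placeholder
      tag: latest                      # placeholder
```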
CockroachDB is no longer part of the default configuration. If you use CockroachDB, please check the host and SSL mode in the database section of your ZITADEL configuration.
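As a sketch of what to verify, assuming the chart passes ZITADEL configuration through `zitadel.configmapConfig` and the CockroachDB settings live under `Database.cockroach` (the host and SSL mode values are placeholders for your environment):

```yaml
zitadel:
  configmapConfig:
    Database:
      cockroach:
        Host: my-cockroachdb-public  # placeholder: your CockroachDB service host
        Port: 26257
        User:
          SSL:
            Mode: verify-full        # placeholder: your SSL mode
```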
The properties for database certificates have been renamed and their defaults removed. If you use any of the following properties, please check the new names and set the values accordingly:

| Old Value | New Value |
|---|---|
| `zitadel.dbSslRootCrt` | `zitadel.dbSslCaCrt` |
| `zitadel.dbSslRootCrtSecret` | `zitadel.dbSslCaCrtSecret` |
| `zitadel.dbSslClientCrtSecret` | `zitadel.dbSslAdminCrtSecret` |
| - | `zitadel.dbSslUserCrtSecret` |
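For instance, a configuration that previously set the old secret properties would now look like the following sketch (the secret names are placeholders):

```yaml
zitadel:
  dbSslCaCrtSecret: db-ca-cert        # was zitadel.dbSslRootCrtSecret
  dbSslAdminCrtSecret: db-admin-cert  # was zitadel.dbSslClientCrtSecret
  dbSslUserCrtSecret: db-user-cert    # new property, no previous equivalent
```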
The ZITADEL chart uses Helm hooks, which are not yet garbage collected by `helm uninstall`. Therefore, to also remove the hooks installed by the ZITADEL Helm chart, delete them manually:
```bash
helm uninstall my-zitadel
for k8sresourcetype in job configmap secret rolebinding role serviceaccount; do
  kubectl delete $k8sresourcetype --selector app.kubernetes.io/name=zitadel,app.kubernetes.io/managed-by=Helm
done
```
For troubleshooting, you can deploy a debug pod by setting the `zitadel.debug.enabled` property to `true`.
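For example, assuming you added the chart repository as `zitadel` and named your release `my-zitadel` (as elsewhere in this guide), the property can be set on an upgrade:

```bash
helm upgrade my-zitadel zitadel/zitadel --reuse-values --set zitadel.debug.enabled=true
```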
You can then use this pod to inspect the ZITADEL configuration and run zitadel commands using the `zitadel` binary.
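For example, to print the binary's available commands inside the debug pod (the pod name is assumed to follow the `my-zitadel-debug` pattern used below):

```bash
kubectl exec -it my-zitadel-debug -- zitadel --help
```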
For more information, print the debug pod's logs using a command like the following:
```bash
kubectl logs rs/my-zitadel-debug
```
If you see this error message in the logs of the setup job, you need to reset the last migration step once you have resolved the issue. To do so, start a debug pod and run a command like the following:
```bash
kubectl exec -it my-zitadel-debug -- zitadel setup cleanup --config /config/zitadel-config-yaml
```
Lint the chart:
```bash
docker run -it --network host --workdir=/data --rm --volume $(pwd):/data quay.io/helmpack/chart-testing:v3.5.0 ct lint --charts charts/zitadel --target-branch main
```
Test the chart:
```bash
# Create a local Kubernetes cluster
kind create cluster --image kindest/node:v1.27.2

# Test the chart
go test ./...
```
Watch the Kubernetes pods if you want to see progress.
```bash
kubectl get pods --all-namespaces --watch

# Or if you have the watch binary installed
watch -n .1 "kubectl get pods --all-namespaces"
```