With the recent release of Camunda 8.8, we continue to strengthen our commitment to robust, production-ready deployments with solid foundations. In our previous blog post about Helm sub-chart changes, we explained how Bitnami’s shift in container image distribution led us to disable infrastructure sub-charts by default, starting with Camunda 8.8.
This blog post highlights key insights from the official Camunda documentation. For the full guide and more in-depth details, check out the complete documentation.
Why official vendor methods matter for Camunda 8.8
As outlined in our August announcement, Camunda 8.8 reinforces our strategy of building deployments on solid foundations—primarily managed PostgreSQL and Elasticsearch services, along with external OIDC providers. However, we understand that these managed infrastructure components aren’t always available in your organization’s service catalog.
That’s why in this blog post, we’ll show you how to integrate these infrastructure components using official deployment methods that don’t depend on Bitnami sub-charts. Instead, we’ll use vendor-supported deployment approaches—the recommended way to deploy and manage these services in production environments.
Using official vendor-supported methods provides several advantages:
- Vendor maintenance: Each deployment method is maintained by the respective project team
- Production-grade features: Built-in backup, monitoring, and scaling capabilities
- Enterprise support: Official support channels and documentation are available
- Security-focused methods: Regular updates and CVE patches from upstream maintainers
Elasticsearch with Elastic Cloud on Kubernetes (ECK)
Elastic Cloud on Kubernetes (ECK) is the official Kubernetes deployment method for Elasticsearch, maintained by Elastic. ECK provides the vendor-recommended approach for deploying Elasticsearch in Kubernetes environments, automatically handling cluster deployment, scaling, upgrades, and security configuration.
For Camunda 8.8, we target Elasticsearch 8.19+, as documented in our supported environments guide.
Installation steps
First, create the Elasticsearch cluster configuration file as elasticsearch-cluster.yml.
Source: elasticsearch/elasticsearch-cluster.yml
---
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elasticsearch
spec:
  # renovate: datasource=docker depName=docker.elastic.co/elasticsearch/elasticsearch
  version: 8.19.4
  http:
    tls:
      selfSignedCertificate:
        disabled: true
  nodeSets:
    - name: masters
      count: 3
      config:
        # yamllint disable-line
        node.store.allow_mmap: 'false'
        # Disable deprecation warnings - https://github.com/camunda/camunda/issues/26285
        # yamllint disable-line
        logger.org.elasticsearch.deprecation: 'off'
      podTemplate:
        spec:
          affinity:
            podAntiAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                - labelSelector:
                    matchExpressions:
                      - key: elasticsearch.k8s.elastic.co/cluster-name
                        operator: In
                        values:
                          - elasticsearch
                  topologyKey: kubernetes.io/hostname
          containers:
            - name: elasticsearch
              securityContext:
                readOnlyRootFilesystem: true
              env:
                - name: ELASTICSEARCH_ENABLE_REST_TLS
                  value: 'false'
                - name: READINESS_PROBE_TIMEOUT
                  value: '300'
              resources:
                requests:
                  cpu: 1
                  memory: 2Gi
                limits:
                  cpu: 2
                  memory: 2Gi
      volumeClaimTemplates:
        - metadata:
            name: elasticsearch-data
          spec:
            accessModes: [ReadWriteOnce]
            resources:
              requests:
                storage: 64Gi
Next, execute the deployment script.
Source: elasticsearch/deploy.sh
#!/bin/bash
# elasticsearch/deploy.sh - Deploy Elasticsearch via ECK operator
set -euo pipefail
# Variables
CAMUNDA_NAMESPACE=${CAMUNDA_NAMESPACE:-camunda}
OPERATOR_NAMESPACE=${1:-elastic-system}
# renovate: datasource=github-releases depName=elastic/cloud-on-k8s
ECK_VERSION="3.1.0"
# Install ECK operator CRDs
kubectl apply --server-side -f \
"https://download.elastic.co/downloads/eck/${ECK_VERSION}/crds.yaml"
# Create operator namespace if needed
kubectl create namespace "$OPERATOR_NAMESPACE" --dry-run=client -o yaml | kubectl apply -f -
# Install ECK operator
kubectl apply -n "$OPERATOR_NAMESPACE" --server-side -f \
"https://download.elastic.co/downloads/eck/${ECK_VERSION}/operator.yaml"
echo "ECK operator deployed in namespace: $OPERATOR_NAMESPACE"
# Wait for operator to be ready
kubectl wait --for=jsonpath='{.status.readyReplicas}'=1 --timeout=300s statefulset/elastic-operator -n "$OPERATOR_NAMESPACE"
# Deploy Elasticsearch cluster
kubectl apply -f "elasticsearch-cluster.yml" -n "$CAMUNDA_NAMESPACE"
# Wait for Elasticsearch cluster to be ready
kubectl wait --for=jsonpath='{.status.phase}'=Ready --timeout=600s elasticsearch --all -n "$CAMUNDA_NAMESPACE"
echo "Elasticsearch deployment completed in namespace: $CAMUNDA_NAMESPACE"
This script:
- Installs ECK CRDs and controller components
- Deploys the Elasticsearch cluster using the configuration above
- Waits for the cluster to become ready
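Once the script finishes, you can sanity-check the cluster state. The following commands are a minimal sketch, assuming the default camunda namespace used above:

# The cluster should report a Ready phase and green health
kubectl get elasticsearch elasticsearch -n camunda

# Inspect the pods created for the 'masters' node set
kubectl get pods -n camunda -l elasticsearch.k8s.elastic.co/cluster-name=elasticsearch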
Integration with Camunda Helm Chart
ECK automatically generates credentials for the built-in elastic user and stores them in a Kubernetes secret, enabling seamless integration with Camunda without any manual credential handling.
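For example, you can retrieve the generated password and query the cluster over a port-forward. This is a quick verification sketch, assuming the elasticsearch-es-http service name from the manifest above:

# Extract the auto-generated password for the 'elastic' user
PASSWORD=$(kubectl get secret elasticsearch-es-elastic-user -n camunda \
  -o go-template='{{.data.elastic | base64decode}}')

# Forward the HTTP service locally and check cluster health
kubectl port-forward service/elasticsearch-es-http 9200:9200 -n camunda &
sleep 2
curl -u "elastic:${PASSWORD}" "http://localhost:9200/_cluster/health?pretty"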
Create the associated Camunda configuration as camunda-elastic-values.yml.
Source: elasticsearch/camunda-elastic-values.yml
---
# Camunda Platform 8 Values - ECK snippet
# This configuration uses external services managed by Kubernetes operators:
# - Elasticsearch: ECK operator with master-only cluster configuration (uses 'elasticsearch-es-elastic-user' secret)
#
# Authentication Secrets:
# - elasticsearch-es-elastic-user: Contains 'elastic' user password for Elasticsearch

# Global configuration
global:
  # Elasticsearch configuration
  elasticsearch:
    enabled: true
    external: true
    # URL configuration
    url:
      protocol: http
      host: elasticsearch-es-http
      port: 9200
    # Authentication using ECK-generated secret
    auth:
      username: elastic
      secret:
        existingSecret: elasticsearch-es-elastic-user
        existingSecretKey: elastic

elasticsearch:
  enabled: false
This configuration tells Camunda to:
- Use the auto-generated elastic user credentials
- Connect via HTTP with the ECK-managed user
- Reference the elasticsearch-es-elastic-user secret created by ECK
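If you want to see how these values resolve before installing, you can render the chart locally. This is a sketch assuming the camunda Helm repository has already been added:

# Render the manifests without installing, then inspect the Elasticsearch wiring
helm template camunda camunda/camunda-platform --version 13.0.0 \
  --values camunda-elastic-values.yml | less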
PostgreSQL with CloudNativePG
CloudNativePG is a CNCF project that provides the official Kubernetes deployment method for PostgreSQL. It is the vendor-recommended approach for cloud-native PostgreSQL deployments. It’s designed specifically for production environments with enterprise-grade features like automated backups, point-in-time recovery, and rolling updates.
Our setup provisions three separate PostgreSQL clusters for different Camunda components, all targeting PostgreSQL 17 (which is the common denominator across current Camunda requirements). Each component gets its own dedicated PostgreSQL cluster:
- pg-identity: for the Camunda Identity component
- pg-keycloak: for the Keycloak identity service
- pg-webmodeler: for the WebModeler component
Note: If you don’t plan to use certain components (for example, WebModeler), you can simply remove the corresponding cluster definition from the postgresql-clusters.yml manifest before deployment. This approach allows you to deploy only the PostgreSQL clusters you actually need, reducing resource consumption.
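For instance, if you don't need WebModeler, you could filter its Cluster document out before applying. This is a sketch assuming mikefarah's yq v4 is installed:

# Keep every document except the pg-webmodeler cluster
yq 'select(.metadata.name != "pg-webmodeler")' postgresql-clusters.yml \
  > postgresql-clusters-trimmed.yml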
Installation steps
First, create the PostgreSQL cluster configuration file as postgresql-clusters.yml.
Source: postgresql/postgresql-clusters.yml
---
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: pg-identity
spec:
  instances: 1
  description: PostgreSQL cluster for Camunda Identity
  # renovate: datasource=docker depName=ghcr.io/cloudnative-pg/postgresql versioning=docker
  imageName: ghcr.io/cloudnative-pg/postgresql:17.5
  storage:
    size: 15Gi
  superuserSecret:
    name: pg-identity-superuser-secret
  # OpenShift Security Context - let SCC manage the security context
  seccompProfile:
    type: RuntimeDefault
  bootstrap:
    initdb:
      database: identity
      owner: identity
      dataChecksums: true
      secret:
        name: pg-identity-secret
---
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: pg-keycloak
spec:
  instances: 1
  description: PostgreSQL cluster for Keycloak
  # renovate: datasource=docker depName=ghcr.io/cloudnative-pg/postgresql versioning=docker
  imageName: ghcr.io/cloudnative-pg/postgresql:17.5
  storage:
    size: 15Gi
  superuserSecret:
    name: pg-keycloak-superuser-secret
  # OpenShift Security Context - let SCC manage the security context
  seccompProfile:
    type: RuntimeDefault
  postgresql:
    parameters:
      lock_timeout: 30s
      statement_timeout: '0'
  bootstrap:
    initdb:
      database: keycloak
      owner: keycloak
      dataChecksums: true
      secret:
        name: pg-keycloak-secret
---
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: pg-webmodeler
spec:
  instances: 1
  description: PostgreSQL cluster for WebModeler
  # renovate: datasource=docker depName=ghcr.io/cloudnative-pg/postgresql versioning=docker
  imageName: ghcr.io/cloudnative-pg/postgresql:17.5
  superuserSecret:
    name: pg-webmodeler-superuser-secret
  # OpenShift Security Context - let SCC manage the security context
  seccompProfile:
    type: RuntimeDefault
  bootstrap:
    initdb:
      database: webmodeler
      owner: webmodeler
      secret:
        name: pg-webmodeler-secret
  storage:
    size: 15Gi
You’ll also need the secrets configuration script.
Source: postgresql/set-secrets.sh
wget https://raw.githubusercontent.com/camunda/camunda-deployment-references/refs/heads/feature/operator-playground/generic/kubernetes/operator-based/postgresql/set-secrets.sh
chmod +x set-secrets.sh
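The script provisions the credentials each cluster references. If you prefer to create them yourself, the essential shape is one basic-auth secret per database, as CloudNativePG expects. This is an illustrative sketch, not the actual script contents; the password here is a random placeholder, and you would repeat the pattern for the keycloak, webmodeler, and superuser secrets:

# Example: application credentials for the Identity cluster
kubectl create secret generic pg-identity-secret \
  --type=kubernetes.io/basic-auth \
  --from-literal=username=identity \
  --from-literal=password="$(openssl rand -base64 24)" \
  -n camunda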
Next, execute the deployment script.
Source: postgresql/deploy.sh
#!/bin/bash
# postgresql/deploy.sh - Deploy PostgreSQL via CloudNativePG operator
set -euo pipefail
# Variables
NAMESPACE=${NAMESPACE:-camunda}
OPERATOR_NAMESPACE=${1:-cnpg-system}
# renovate: datasource=github-releases depName=cloudnative-pg/cloudnative-pg
CNPG_VERSION="1.27.0"
# Install CloudNativePG operator CRDs and operator
kubectl apply -n "$OPERATOR_NAMESPACE" --server-side -f \
"https://raw.githubusercontent.com/cloudnative-pg/cloudnative-pg/release-${CNPG_VERSION%.*}/releases/cnpg-${CNPG_VERSION}.yaml"
# Wait for operator to be ready
kubectl rollout status deployment \
-n "$OPERATOR_NAMESPACE" cnpg-controller-manager \
--timeout=300s
echo "CloudNativePG operator deployed in namespace: $OPERATOR_NAMESPACE"
# Create PostgreSQL secrets
NAMESPACE="$NAMESPACE" "./set-secrets.sh"
# Deploy PostgreSQL
kubectl apply --server-side -f "postgresql-clusters.yml" -n "$NAMESPACE"
# Wait for PostgreSQL cluster to be ready
kubectl wait --for=condition=Ready --timeout=600s cluster --all -n "$NAMESPACE"
echo "PostgreSQL deployment completed in namespace: $NAMESPACE"
This script:
1. Installs the CloudNativePG management components
2. Creates authentication secrets for each database
3. Deploys the three PostgreSQL clusters using the configuration above
4. Waits for all clusters to become ready
OpenShift environments require a slightly modified script.
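You can then confirm that all three clusters are healthy. A minimal check follows; the second command is optional and assumes the cnpg kubectl plugin is installed:

# All clusters should report 'Cluster in healthy state'
kubectl get clusters.postgresql.cnpg.io -n camunda

# Optional: detailed per-cluster status via the cnpg plugin
kubectl cnpg status pg-identity -n camunda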
Integration with Camunda Helm Chart
PostgreSQL integration is configured through separate values files for each component.
For Camunda Identity, create camunda-identity-values.yml.
Source: postgresql/camunda-identity-values.yml
---
# Camunda Platform 8 Values - CNPG Identity Snippet
# This configuration uses external services managed by Kubernetes operators:
# - PostgreSQL: CloudNativePG operator with a dedicated cluster for Identity
#
# Authentication Secrets:
# - pg-identity-secret: Contains PostgreSQL user credentials for Identity

identity:
  enabled: true
  # Use external PostgreSQL
  externalDatabase:
    enabled: true
    host: pg-identity-rw
    port: 5432
    database: identity
    username: '' # Will be read from secret
    # Use the new secret configuration format
    secret:
      existingSecret: pg-identity-secret
      existingSecretKey: password
For WebModeler, create camunda-webmodeler-values.yml.
Source: postgresql/camunda-webmodeler-values.yml
---
# Camunda Platform 8 Values - CNPG Webmodeler Snippet
# This configuration uses external services managed by Kubernetes operators:
# - PostgreSQL: CloudNativePG operator with a dedicated cluster for WebModeler
#
# Authentication Secrets:
# - pg-webmodeler-secret: Contains PostgreSQL user credentials for WebModeler

# WebModeler configuration
webModeler:
  enabled: true
  # Use external PostgreSQL for WebModeler
  restapi:
    externalDatabase:
      enabled: true
      host: pg-webmodeler-rw
      port: 5432
      database: webmodeler
      user: webmodeler
      secret:
        existingSecret: pg-webmodeler-secret
        existingSecretKey: password
    # Mail configuration (required)
    mail:
      fromAddress: noreply@camunda.local
      fromName: Camunda Platform

webModelerPostgresql:
  enabled: false
This Camunda Helm Chart configuration consumes the database secrets that are generated beforehand by the script set-secrets.sh. This ensures secure communication between Camunda components and their respective databases.
Alternatively, you can create these secrets manually by following the official CloudNativePG bootstrap guide.
Keycloak with the official Keycloak deployment method
The Keycloak deployment for Kubernetes is the official vendor-supported way to deploy and manage Keycloak instances on Kubernetes. Maintained by the Keycloak team, it provides the recommended approach for automated deployment, configuration, and lifecycle management.
We target Keycloak 26+, as specified in our supported environments documentation.
Installation steps
First, create the Keycloak instance configuration file as keycloak-instance-no-domain.yml.
Source: keycloak/keycloak-instance-no-domain.yml
---
apiVersion: k8s.keycloak.org/v2alpha1
kind: Keycloak
metadata:
  name: keycloak
spec:
  # renovate: datasource=docker depName=camunda/keycloak versioning=regex:^quay-optimized-(?<major>\d+)\.(?<minor>\d+)\.(?<patch>\d+)$
  image: docker.io/camunda/keycloak:quay-optimized-26.3.2
  instances: 1
  db:
    # Use official url parameter instead of individual host/port/database
    # The pg-keycloak-rw service is provided by the CNPG pg-keycloak cluster
    # aws-wrapper is required for optimized images (no impact on functionality)
    url: jdbc:aws-wrapper:postgresql://pg-keycloak-rw:5432/keycloak
    schema: public
    usernameSecret:
      name: pg-keycloak-secret
      key: username
    passwordSecret:
      name: pg-keycloak-secret
      key: password
  http:
    httpEnabled: true
  additionalOptions:
    - name: http-relative-path
      value: /auth
  ingress:
    enabled: false
  hostname:
    # use the service name as hostname to sign the tokens correctly
    hostname: keycloak-service
    strict: false
  resources:
    limits:
      cpu: 500m
      memory: 1Gi
    requests:
      cpu: 250m
      memory: 512Mi
Next, execute the deployment script.
Source: keycloak/deploy.sh
#!/bin/bash
# keycloak/deploy.sh - Deploy Keycloak via Keycloak operator (requires PostgreSQL)
set -euo pipefail
# Variables
CAMUNDA_NAMESPACE=${CAMUNDA_NAMESPACE:-camunda}
KEYCLOAK_CONFIG_FILE=${KEYCLOAK_CONFIG_FILE:-"keycloak-instance-no-domain.yml"}
# renovate: datasource=github-releases depName=keycloak/keycloak
KEYCLOAK_VERSION="26.4.0"
# Install Keycloak operator CRDs
kubectl apply --server-side -f \
"https://raw.githubusercontent.com/keycloak/keycloak-k8s-resources/${KEYCLOAK_VERSION}/kubernetes/keycloaks.k8s.keycloak.org-v1.yml"
kubectl apply --server-side -f \
"https://raw.githubusercontent.com/keycloak/keycloak-k8s-resources/${KEYCLOAK_VERSION}/kubernetes/keycloakrealmimports.k8s.keycloak.org-v1.yml"
# Install Keycloak operator
kubectl apply -n "$CAMUNDA_NAMESPACE" --server-side -f \
"https://raw.githubusercontent.com/keycloak/keycloak-k8s-resources/${KEYCLOAK_VERSION}/kubernetes/kubernetes.yml"
# Wait for operator to be ready
kubectl wait --for=condition=available --timeout=300s deployment/keycloak-operator -n "$CAMUNDA_NAMESPACE"
echo "Keycloak operator deployed in namespace: $CAMUNDA_NAMESPACE"
# Deploy Keycloak with variable substitution via envsubst
envsubst < "$KEYCLOAK_CONFIG_FILE" | kubectl apply -f - -n "$CAMUNDA_NAMESPACE"
# Wait for Keycloak instance to be ready
kubectl wait --for=condition=Ready --timeout=600s keycloak --all -n "$CAMUNDA_NAMESPACE"
echo "Keycloak deployment completed in namespace: $CAMUNDA_NAMESPACE"
This script:
1. Installs Keycloak CRDs and controller components
2. Deploys the Keycloak instance using the configuration above
3. Configures the instance to serve under the /auth path prefix
4. Waits for the instance to become ready
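To verify the instance, you can check the custom resource status and probe the HTTP endpoint. This is a sketch assuming the keycloak-service name used in the manifest above:

# The Keycloak CR should report the Ready condition as true
kubectl get keycloaks -n camunda

# Probe the /auth relative path over a local port-forward
kubectl port-forward service/keycloak-service 8080:8080 -n camunda &
sleep 2
curl -I http://localhost:8080/auth/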
Additional configuration options
Different manifests are available for various deployment topologies:
- For Nginx ingress with custom domains: keycloak-instance-domain-nginx.yml
- For OpenShift with route configuration: keycloak-instance-domain-openshift.yml
Integration with Camunda Helm Chart
Keycloak integration also varies based on your domain configuration:
- For domain-based deployments: camunda-keycloak-domain-values.yml
- For local/port-forward access: camunda-keycloak-no-domain-values.yml
The Keycloak deployment automatically creates admin credentials and configures the Keycloak realm for Camunda integration.
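For instance, the initial admin credentials land in a secret named after the custom resource. This retrieval sketch assumes the keycloak CR name from the manifest above:

# Read the generated initial admin username and password
kubectl get secret keycloak-initial-admin -n camunda \
  -o go-template='{{.data.username | base64decode}}{{"\n"}}{{.data.password | base64decode}}{{"\n"}}'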
Putting it all together: Complete Camunda 8.8 deployment
Once you have deployed the infrastructure components using the official vendor methods above, you can install Camunda 8.8. Combine the configuration files as described in the Helm documentation.
Deploy Camunda Platform
Deploy Camunda using Helm with the vendor-supported infrastructure configurations.
helm install camunda camunda/camunda-platform --version 13.0.0 \
--values camunda-identity-values.yml \
--values camunda-webmodeler-values.yml \
--values camunda-elastic-values.yml \
--values camunda-keycloak-no-domain-values.yml
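After the release is installed, watching the rollout is a simple way to confirm everything wires up, assuming the same camunda namespace used throughout this post:

# Wait for all Camunda pods to become ready
kubectl get pods -n camunda --watch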
This approach gives you:
- Production-ready infrastructure managed by official vendor-supported methods
- Security by default with auto-generated credentials
- Operational excellence with built-in monitoring, backup, and scaling capabilities
- Future-proof architecture that doesn’t depend on deprecated Bitnami sub-charts
What’s next?
We’re committed to making this transition as smooth as possible. In the coming months, expect:
- Enhanced migration guides for existing deployments using Bitnami sub-charts
This vendor-supported approach will become the recommended way to run cloud-native Camunda deployments when operating infrastructure services in a self-managed environment. By leveraging the expertise of upstream maintainers and following industry best practices, you'll have a more robust, secure, and maintainable Camunda 8 installation.
For more information, please refer to the documentation or contact us.
Start the discussion at forum.camunda.io