Version: DAI 25.4

Deploying Eggplant DAI in Containers

This page describes how to deploy Eggplant DAI in a Kubernetes environment. It includes system and software requirements specific to Kubernetes deployments.

Tip

Before proceeding with the installation of Eggplant DAI in containers, you should ensure engineers in your organisation are Certified Kubernetes Administrators (https://www.cncf.io/training/certification/cka/) or have equivalent experience.

Software Recommendations for Eggplant DAI Deployments with Kubernetes

Eggplant DAI can be installed on Kubernetes using Helm. You will need to meet the following requirements:

Requirement            Notes
Kubernetes cluster     Tested version 1.32.
ingress-nginx          Tested version 1.11.5 (chart version 4.11.5).
Keda v2                Optional, for autoscaling engines. Tested version 2.17.2.
Eggplant DAI license   Speak to support if needed.
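
If these prerequisites are not already in place, the sketch below shows one possible way to install ingress-nginx and Keda with Helm. The chart names and repository URLs are the upstream defaults, the versions match the table above, and controller.allowSnippetAnnotations is needed by DAI's ingress rules (see the notes further down this page):

# ingress-nginx, with snippet annotations enabled for DAI's ingress rules
helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --version 4.11.5 \
  --set controller.allowSnippetAnnotations=true

# Keda v2, optional, for engine autoscaling
helm upgrade --install keda keda \
  --repo https://kedacore.github.io/charts \
  --namespace keda --create-namespace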

Create Custom Values File

The Eggplant DAI installation can be customised by passing custom values to the Helm installation. In the example below, substitute values appropriate for your own deployment.

dai.yaml
global:
  # Please omit the enclosing square brackets when you specify
  # your licenses below. Including them can cause issues with
  # the yaml file formatting.
  devLicense: a-real-license-goes-here
  execLicense: a-real-license-goes-here
  featLicenses: comma-separated-feature-licenses-go-here

  ingress:
    host: dai.example.com

  keycloak:
    host: kc-dai.example.com
    user: kcadmin
    password: kcadminpassword

  objectStorage:
    minio:
      rootUser: eggplant
      rootPassword: eggplant

  postgresql:
    auth:
      postgresPassword: postgres

keycloak:
  auth:
    # note adminUser and adminPassword must match global.keycloak.user and global.keycloak.password
    adminUser: kcadmin
    adminPassword: kcadminpassword
  externalDatabase:
    # This must match the value of global.postgresql.auth.postgresPassword
    password: postgres

keycloak-user-provisioner:
  adminUsers:
    daiAdmin:
      username: admin-username
      password: admin-password
      email: admin-email

A few notes:

  • global.ingress.host and global.keycloak.host do not have to be the same domain, but they do have to be resolvable. You can do this either by having something like ExternalDNS deployed on your cluster or manually creating the records and pointing them at your cluster.
  • When running in containers, DAI must be used in conjunction with TLS. TLS can be terminated either within the cluster, by adding a certificate to the ingress, or on an external load balancer. See the Configuring TLS section below for details.
  • The hostname configured under global.ingress.host must be solely for DAI use. Running other applications on the same subdomain is not supported.
  • keycloak-user-provisioner.adminUsers.daiAdmin.password must be at least 12 characters long. You can add additional admin users by adding extra keys under keycloak-user-provisioner.adminUsers.
  • DAI makes use of configuration snippets within its ingress rules. If you are running a recent version of the ingress-nginx controller, you must ensure it is configured to allow snippet annotations. If installing ingress-nginx with Helm, this can be achieved by setting controller.allowSnippetAnnotations to true.

Full documentation for all values can be found here.

Deploy Eggplant DAI with Kubernetes

  1. Download the required software. Refer to the Software Requirements table above for the list of what you need.

  2. Create a new namespace to install Eggplant DAI:

    kubectl create ns dai
  3. Create a custom values file with your preferred configuration (see Create Custom Values File) and save it as dai.yaml.

  4. Deploy Eggplant DAI with the default configuration using the command below.

    helm upgrade --install dai \
      oci://harbor.dai.eggplant.cloud/charts/dai \
      --namespace dai \
      --create-namespace \
      --version 1.34.3 \
      --values dai.yaml \
      --wait

    On success you will see output similar to this:

    Release "dai" does not exist. Installing it now.
    NAME: dai
    LAST DEPLOYED: Fri Feb 17 08:20:17 2023
    NAMESPACE: dai
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None
    NOTES:
    Thank you for installing dai.

    Your release is named dai.

    To learn more about the release, try:

    $ helm status dai
    $ helm get all dai

    You can access your DAI instance by visiting dai.example.com.

    admin username: admin-username
    admin password: admin-password
    Warning

    The Helm chart installs these dependencies, but it doesn't manage backups of data stored in PostgreSQL or MinIO. You need to arrange backups of these services in a production deployment as part of your disaster recovery plan. There's an example of one approach to backups later in this documentation.
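
Before moving on, it can be worth confirming that all pods are running and the release is healthy, for example:

kubectl --namespace dai get pods
helm status dai --namespace dai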

Deploy Eggplant DAI with OpenShift

As of version 25.3, you can install DAI into an OpenShift-based Kubernetes cluster by following the instructions on this page. No additional changes to values are required. Before you install DAI, please ensure you have the following:

  • The Kubernetes ingress-nginx controller is installed (if the configured IngressClass is anything other than nginx, it will need configuring; see Ingress Class below).
  • An OpenShift project is available, with Administrator access to the project.
Tip

DAI requires the Kubernetes ingress-nginx controller to manage inbound traffic. This is because the Ingress definitions rely on annotations specific to that controller.

Other controllers — such as OpenShift Routes or the F5 NGINX Ingress Controller — are not compatible and will not function correctly.

If you're running on OpenShift, you must install the community-maintained ingress-nginx controller. This may require elevated, cluster-wide permissions. Please consult your cluster administrator if needed.

For installation instructions and manifests, refer to the official project on GitHub: https://github.com/kubernetes/ingress-nginx

To install DAI, first ensure that you are logged into OpenShift and have the correct project selected. The installation is then broadly the same as on any other cluster, except that the --create-namespace flag is omitted, as the namespace should already exist as part of the project:

oc login https://<cluster_hostname>:6443 -u <username>
oc project <project_name>
helm upgrade --install dai \
  oci://harbor.dai.eggplant.cloud/charts/dai \
  --version 1.34.3 \
  --values dai.yaml \
  --wait

On success you will see output similar to this:

Release "dai" does not exist. Installing it now.
NAME: dai
LAST DEPLOYED: Fri Jul 4 12:28:05 2025
NAMESPACE: dai
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Thank you for installing dai.

Your release is named dai.

To learn more about the release, try:

$ helm status dai
$ helm get all dai

You can access your DAI instance by visiting dai.example.com.

admin username: admin-username
admin password: admin-password

Supported customizations

In the default installation above, all dependencies are deployed to Kubernetes, with data stored in persistent volumes for PostgreSQL and MinIO. If you have an existing solution in place for PostgreSQL or AWS S3-compatible object storage that you want to use instead, you can customise the Eggplant DAI installation to use it. Further, you may want to pass credentials using Kubernetes secrets rather than in the values file for improved security.

This section gives examples showing how to customise your installation. All examples use secrets for credentials, and all are snippets meant to be added to the default installation values demonstrated above.

Ingress Class

DAI requires the Kubernetes ingress-nginx controller to manage inbound traffic. This is because the Ingress definitions rely on annotations specific to that controller. DAI defaults to using an ingress class called nginx. If you wish to use a different ingress class then you can override this in the values by setting global.ingress.className to your preferred value within dai.yaml.
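
For example, to use a hypothetical ingress class named internal-nginx:

dai.yaml
global:
  ingress:
    className: internal-nginx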

Object storage configuration

Eggplant DAI depends on an S3 compatible object storage solution for persisting assets such as test screenshots. The Helm chart gives several options for configuring this.

Bundled MinIO (Default)

By default, the Eggplant DAI Helm chart deploys MinIO as a sub-chart, with the rootUser and rootPassword configured in the Helm values (as per the Create Custom Values File section).

If you would prefer not to set these within the values, you can provide them as an existing secret. First, prepare a secret containing the credentials:

dai-objectstorage.yaml
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: dai-objectstorage
stringData:
  rootUser: username
  rootPassword: password

kubectl --namespace dai apply -f dai-objectstorage.yaml

Then update your values file to point to the existing secret and run Helm upgrade:

dai.yaml
global:
  objectStorage:
    minio:
      existingSecret: dai-objectstorage

minio:
  existingSecret: dai-objectstorage

helm upgrade dai oci://harbor.dai.eggplant.cloud/charts/dai --version 1.34.3 -f dai.yaml --wait

Note that global.objectStorage.minio.existingSecret and minio.existingSecret must match.

Warning

Changes to the MinIO configuration will not be supported by Eggplant.

Existing MinIO

If you have an existing MinIO installation, you can use it instead as follows, reusing the same secret created above.

dai.yaml
global:
  objectStorage:
    minio:
      existingSecret: dai-objectstorage
      endpoint: my.minio.deployment.example.com

minio:
  enabled: false

Note that the minio key sets enabled to false. This prevents the bundled MinIO from being deployed.

Warning

Eggplant cannot provide support for MinIO installations external to your DAI installation.

S3

AWS S3 can be configured for object storage with an existing secret as follows. First, prepare a secret containing the credentials:

dai-objectstorage.yaml
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: dai-objectstorage
stringData:
  aws-access-key-id: my-access-key-id
  aws-secret-access-key: my-secret-access-key

kubectl --namespace dai apply -f dai-objectstorage.yaml

Modify your values file to update or add the following keys:

dai.yaml
global:
  objectStorage:
    provider: "aws"
    aws:
      existingSecret: dai-objectstorage
      awsAccessKeyIdKey: aws-access-key-id
      awsSecretAccessKeyKey: aws-secret-access-key
      region: "eu-west-1"

minio:
  enabled: false

Now you can deploy it to your cluster with Helm.
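
For example, reusing the upgrade command shown earlier:

helm upgrade dai oci://harbor.dai.eggplant.cloud/charts/dai --version 1.34.3 -f dai.yaml --wait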

PostgreSQL

Eggplant DAI uses PostgreSQL for data storage. The Helm chart provides several options for configuring it.

Bundled PostgreSQL (Default)

By default, the Eggplant DAI Helm chart deploys PostgreSQL as a sub-chart with username and password both set to postgres.

To override this, create a secret containing the credentials:

dai-postgres.yaml
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: dai-postgres
stringData:
  postgres-password: my-postgres-password
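
Apply the secret to the cluster as before:

kubectl --namespace dai apply -f dai-postgres.yaml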

Modify your values file to update or add the following keys and apply it to your cluster with Helm:

dai.yaml
global:
  postgresql:
    auth:
      existingSecret: dai-postgres

keycloak:
  externalDatabase:
    existingSecret: dai-postgres
    existingSecretPasswordKey: postgres-password

Note keycloak.externalDatabase.existingSecretPasswordKey: by default, the Bitnami chart expects the existing secret to have the database password under the key db-password, but the PostgreSQL chart and DAI default to postgres-password as the key. You can either override the behaviour of the Keycloak chart, as above, or alternatively you could set global.postgresql.auth.secretKeys.adminPasswordKey.

If you override the PostgreSQL sub-chart's extraEnvVars, you need to ensure you also set the POSTGRESQL_DATABASE environment variable to keycloak. This creates the Keycloak database that is configured under the keycloak key in the default values.
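
As a sketch, assuming the Bitnami PostgreSQL sub-chart layout (primary.extraEnvVars), that might look like:

dai.yaml
postgresql:
  primary:
    extraEnvVars:
      # Required so the Keycloak database is created at startup
      - name: POSTGRESQL_DATABASE
        value: keycloak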

Warning

Eggplant cannot support changes to the PostgreSQL configuration.

Existing PostgreSQL

If you have an existing PostgreSQL database installation, or would like to use an external service such as AWS RDS, you can do so. By default, we use the postgres user for installation. When using a custom PostgreSQL user, please ensure it has the SUPERUSER role, as certain operations, such as creating databases or extensions, require elevated privileges.
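
As an illustration, granting the role with psql might look like this, assuming a postgres superuser connection and the youruser account used in the values below:

psql --host my.postgresql.host.example.com --username postgres \
  --command "ALTER ROLE youruser WITH SUPERUSER;"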

Using the same existing secret as above, modify your values file to set the following keys:

dai.yaml
global:
  postgresql:
    host: my.postgresql.host.example.com
    auth:
      existingSecret: dai-postgres
      username: youruser

keycloak:
  externalDatabase:
    existingSecret: dai-postgres
    existingSecretPasswordKey: postgres-password
    user: youruser
    host: my.postgresql.host.example.com

postgresql:
  enabled: false

Note that if you use an existing PostgreSQL deployment, you also need to update the Keycloak configuration to use this.

Engine scaling

The engine component of Eggplant DAI is used for test execution and report generation. As your DAI instance becomes busier, you will need to scale this component to handle greater test volumes. We recommend using Keda to manage this scaling.

To use Keda, first install it according to upstream instructions.

Tip

Only Keda v2 is supported.

Then enable Keda by adding the following to your values file:

dai.yaml
ai-engine:
  keda:
    enabled: true

If you can't use Keda for some reason, you can manually manage the number of engine replicas by adding the following to your values file, increasing it as your instance becomes busier. If you are not using Keda, we recommend you configure n + 1 engines, where n is the number of tests you wish to run in parallel.

dai.yaml
ai-engine:
  replicaCount: 2

Keycloak

Eggplant DAI depends on Keycloak for authentication and authorization services. We bundle this as a sub-chart and do not currently support using your own Keycloak installation.

Configuring TLS

When running DAI in containers, the use of TLS certificates is mandatory. However, starting with DAI version 7.3, it is possible to use TLS certificates signed by a private certificate authority (see the Adding a custom TLS Certificate Authority (CA) section below). How you add TLS is up to you and is likely to depend on your infrastructure. Options include:

  • Adding the TLS certificates to a load balancer external to the cluster and terminating the TLS connections there
  • Adding wildcard certificates to the ingress-nginx controller so that all ingress rules / hosts can share a common certificate
  • Adding a Kubernetes secret containing the TLS certificates to the DAI namespace and supplying the secret details to DAI via values.

The first two options depend on your infrastructure and do not require DAI configuration changes. If you would like to add a TLS certificate to DAI itself, you can:

  1. Obtain your certificate and key in PEM format. The certificate should include the full chain.

  2. Create the TLS secret in the DAI namespace (note that you may have to create the namespace first):

    kubectl create secret tls tls-secret --cert=path/to/cert/file --key=path/to/key/file --namespace dai
  3. Update the ingress section of your DAI helm values file to add tls sections under global.ingress and global.keycloak as shown below (update the hostname and secret name to suit your installation):

    dai.yaml
    global:
      ingress:
        host: dai.example.com
        tls:
          - hosts:
              - dai.example.com
            secretName: tls-secret
      keycloak:
        tls:
          secretName: tls-secret
  4. Complete the helm install as normal.

Adding a custom TLS Certificate Authority (CA)

The individual DAI services use Keycloak for authentication and authorization of requests. This requires that the services can reach the Keycloak endpoint, including validating the TLS certificates that have been configured. Each release of DAI ships with up-to-date CA certificates allowing most publicly signed certificates to be validated. If your DAI instance is unable to verify the configured TLS certificate, you can supply the associated CA via the helm values by setting the global.ingress.customCACert value.

The example snippet below shows a private CA certificate being added to DAI. This would allow TLS certificates signed by that CA to be used with DAI.

dai.yaml
global:
  ingress:
    host: dai.example.com
    customCACert: |
      -----BEGIN CERTIFICATE-----
      MIIFYDCCA0igAwIBAgIJAL6zT1uUli/yMA0GCSqGSIb3DQEBCwUAMEUxCzAJBgNV
      BAYTAkFVMRMwEQYDVQQIDApTb21lLVN0YXRlMSEwHwYDVQQKDBhJbnRlcm5ldCBX
      aWRnaXRzIFB0eSBMdGQwHhcNMjQwNDEwMTQ0NTU3WhcNMjkwNDA5MTQ0NTU3WjBF
      MQswCQYDVQQGEwJBVTETMBEGA1UECAwKU29tZS1TdGF0ZTEhMB8GA1UECgwYSW50
      ZXJuZXQgV2lkZ2l0cyBQdHkgTHRkMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIIC
      CgKCAgEAyP3ve0Y3Y9Wh0G6dxWGQSdOPzVRgzv2adUV763GQOiEnSy8Z44X9rYNS
      s6Yd80H5hDuHFPPTHeQdOQ+BVFlUqKT1nsVzDwwy9eApHLRiv3CXKE749sqRiFiQ
      PwEGxk2zjRqOFm2phFKY81ikdazjd9Kl9bfdALrmJk6vFCHV3bwIgOFlkl42nWT+
      cAnjZEo4p9pejKyccC/43L/BiVZ3ILYKmAhtQFgtBfX0STlV1eqvr0tKOU1WcHdb
      AkUoWgmymEAqKOTgF3mptjeg2a+lAaXPMD25vUI5OZ3Kn+kGoL2bRM/3ujQdwv+0
      reSvah2+XmJHeaGA3Mr00IGJF85H9YQDgrU9/nvJQX0NyDjO9Z4qViROU40nCMrf
      41cBmD9e5DZL5bSyhAiNlbq0MplIvm2ervlvou6ymPfVkQmdet5rGNe3+XOFbamc
      c2j65tgUm6+KSoho7xi+3PvmRDwgplTKLavcdZMHxjVWp6gQm6PVOxrheII+ZuIS
      Au3ixVaN4anPRlV02EvkBu8Hf67faL6sL65kTYkL98BSdpIepJkHEBONv9DuCH9g
      5jiGHXPyBkMuga7uxF6OTVR/SHd+io0ookbUyfuSIWywBBHvDu0cJG8CmrdptzCv
      YBGg7fjz8IOrawOdv/3VVEzixe+qVr3HFT+eO3MMf9eNCo7M7U8CAwEAAaNTMFEw
      HQYDVR0OBBYEFKSpIbnsb6Fff2dbiJYxLrtiyoGKMB8GA1UdIwQYMBaAFKSpIbns
      b6Fff2dbiJYxLrtiyoGKMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQELBQAD
      ggIBAJD/rb1XpIXchX58dG6AV+uPtZ7ek1kqJHTA6fHW7UchZrtXW27HCraPnZ9m
      hN/iiXxyflPWQFpndAWA3Zzor3eKZMPek5DvXa5yosC/2/kRlw+GkIXhrJEKvOaM
      tIOXgKMuwZyLsYV5PD1Y3J1q+jx58euiZtsQRIDM8K7BuzmMruvWGXzxT2Vm54ZS
      6s597X2YKlSu/34+G9a/8N9OqiULp+k0QKz/DXOpWXk8q9u95Ga2WExfsSH8H+ZW
      2swVt9shdH5/L7Jbpm9Kq1acyI9WPh9oXDsGIcoWh10zblgqajIMzYX22tHjJc7M
      GcoLubn9sSkJgqP46IfOngmbH2Ik9mDylXl11LPgrw8XudMh/Uf5RkvsJX9tAmib
      IHrM2n0tSQiuZA16suABvGsmQkdhBzhFnpZJfgmxVXnEmwU8k2cUByil09ZoKtYw
      7YvMz+p+uFdl+uoOhWOLbfSftymMkeHNRdykiNCXqCjPzZgHVwHCCIPIDWpIbb7w
      qkzoUw6dp1YOz4RO0ZsJ7zhztSsSGHinvbH1TnXzdCj1OjkP612/lRJB3NC99rbg
      HFKMUeiXEtZbMtTYaBxJ9vWxtkbeVwaWZommZVjDk6kYFiUsShij9olN78JcZB6/
      2TxFcGsToiq0hJQ19soqTKuBP79TCpeQpOQj/glYB8MZbU+R
      -----END CERTIFICATE-----

Configuring Node Selectors

Node selectors can be added via the helm values to control which Kubernetes nodes the DAI services run on. You can set the node selector for all the DAI services by setting global.dai.nodeSelector in the values. As DAI also uses third-party components, these have to be set separately within the values file. The example below shows how the node selector can be set to use only nodes with the label node-pool=application, both for all DAI services and in the PostgreSQL, RabbitMQ, MinIO and Keycloak sub-charts.

dai.yaml
global:
  dai:
    nodeSelector:
      node-pool: application

postgresql:
  nodeSelector:
    node-pool: application

rabbitmq:
  nodeSelector:
    node-pool: application

minio:
  nodeSelector:
    node-pool: application

keycloak:
  nodeSelector:
    node-pool: application

Resources

DAI comes pre-configured with Kubernetes resource requests (for CPU and memory) and limits (for memory only) for each of the pods / services deployed. These are suited to a typical initial DAI deployment and should be considered the minimum requirement. In practice, the actual resources required will vary based on the number of users, the number of running tests, and the complexity of DAI models.

As with any Kubernetes application, it is good practice to review resource usage over time and tune your resource requests / limits appropriately.

The resources for each service can be individually overridden with a standard Kubernetes resource block (service-name below is a placeholder for the service's chart key). For example:

dai.yaml
service-name:
  resources:
    limits:
      memory: 256Mi
    requests:
      cpu: 200m
      memory: 256Mi

Refer to the full values documentation for more details.

Tip

When determining resource requests / limits we recommend that you:

  • Use a monitoring tool / stack that collects and graphs metrics so you can see actual usage over time. (Simply looking at snapshots in time may give a false reading.)
  • Bear in mind that monitoring tools may miss peaks due to scrape intervals.
  • Be aware that an idle DAI instance may use very little resource, but one running tests can have much greater requirements.
  • Monitor for both out-of-memory (OOM) events and CPU throttling.
    • OOM events will cause pod restarts and will interrupt running tests.
    • Any level of CPU throttling can cause intermittent issues within DAI and should be avoided. (Note that by default we do not set CPU limits at all, to avoid this.)

Backup and restore

You must regularly back up configuration and results data from your DAI installation. Data that needs to be backed up is stored in PostgreSQL (as configured for DAI and Keycloak) and in object storage.

How you back up this data will depend on how you've configured your deployment, but here we provide an example of how both can be backed up in the default installation shown at the start of this document.

Backup and restore PostgreSQL

Eggplant DAI uses several databases to store its data, so in the default installation we recommend using pg_dumpall to ensure you back up all of them. If you're using a shared database instance, you need to ensure you back up the following databases (a per-database sketch follows the list):

  • execution_service
  • keycloak
  • sut_service
  • ttdb
  • vam
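
If you prefer to dump those databases individually on a shared instance, a minimal sketch (assuming network access to the server and a postgres superuser) is:

for db in execution_service keycloak sut_service ttdb vam; do
  pg_dump --host my.postgresql.host.example.com --username postgres --clean "$db" > "$db.dump"
done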

In the example below, we execute pg_dumpall directly in the default PostgreSQL pod. The result is then streamed to a dai.dump file on the local computer:

POSTGRES_PASSWORD=$(kubectl get secret postgres --namespace dai -o json | jq '.data."postgres-password"' -r | base64 -d)

kubectl --namespace dai exec postgres-0 \
-- /bin/sh -c \
"export PGPASSWORD=$POSTGRES_PASSWORD && pg_dumpall --username postgres --clean" \
> dai.dump
Warning

The command given here includes the --clean option. This causes pg_dumpall to include commands to drop the databases in the dump. This makes restoring easier, but you should be aware that it will happen.

In reality, you would likely want to:

  • compress the dump
  • put it on a backup storage server
  • execute it on a schedule.

But the use of pg_dumpall would still stand; a scheduled example is sketched below.
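
As an illustration of that approach, the sketch below runs pg_dumpall nightly from a Kubernetes CronJob and compresses the dump onto a volume. The postgres host, secret name, and secret key match the default installation above; the postgres:17 image is illustrative, and the dai-backup PVC and file names are hypothetical:

dai-postgres-backup.yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: dai-postgres-backup
  namespace: dai
spec:
  schedule: "0 2 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: backup
              image: postgres:17
              env:
                # Password taken from the secret created by the default installation
                - name: PGPASSWORD
                  valueFrom:
                    secretKeyRef:
                      name: postgres
                      key: postgres-password
              command:
                - /bin/sh
                - -c
                - pg_dumpall --host postgres --username postgres --clean | gzip > /backup/dai-$(date +%F).dump.gz
              volumeMounts:
                - name: backup
                  mountPath: /backup
          volumes:
            - name: backup
              persistentVolumeClaim:
                claimName: dai-backup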

To restore the backup, you can reverse the process as follows:

POSTGRES_PASSWORD=$(kubectl get secret postgres --namespace dai -o json | jq '.data."postgres-password"' -r | base64 -d)

kubectl --namespace dai exec -i postgres-0 -- /bin/sh -c \
"export PGPASSWORD=\$POSTGRES_PASSWORD && psql --username=postgres \
--dbname=postgres \
--file -" < dai.dump | tee restore.log

A few notes:

  • We used the --clean option when creating the dump. This means all databases in the backup will be dropped and recreated.
  • We specify --dbname postgres. As the backup was created with --clean, you'll get errors if you connect to one of the databases being dropped as part of the restore.

Backup and restore MinIO

Images and other assets are stored in object storage rather than the database. You must back these up in addition to the database content discussed above. A quick way to run this backup from your local machine is demonstrated below; it requires the MinIO client tools to be installed locally.

ROOT_USER=$(kubectl --namespace dai get secret dai-objectstorage -o json | jq -r '.data."rootUser"' | base64 -d )
ROOT_PASSWORD=$(kubectl --namespace dai get secret dai-objectstorage -o json | jq -r '.data."rootPassword"' | base64 -d)
kubectl --namespace dai port-forward service/minio 9000:9000 &
PID=$!
mc alias set minio http://localhost:9000 $ROOT_USER $ROOT_PASSWORD --api S3v4
mkdir backup
mc cp --recursive --quiet minio/ backup/
kill $PID

As before, it's likely you'll want to compress the backup, move it to an appropriate storage server and execute it on a schedule.

To restore the backup, you can just reverse the copy command:

ROOT_USER=$(kubectl --namespace dai get secret dai-objectstorage -o json | jq -r '.data."rootUser"' | base64 -d )
ROOT_PASSWORD=$(kubectl --namespace dai get secret dai-objectstorage -o json | jq -r '.data."rootPassword"' | base64 -d)
kubectl --namespace dai port-forward service/minio 9000:9000 &
PID=$!
mc alias set minio http://localhost:9000 $ROOT_USER $ROOT_PASSWORD --api S3v4
mc mb minio/assets
mc mb minio/screenshots
mc cp --recursive --quiet backup/ minio/
kill $PID

This assumes you've used the default configuration to have separate assets and screenshots buckets, in which case you need to create the buckets with mc mb before you can restore.

Upgrading

The general procedure for upgrading is the same as any Helm release:

  • Back up your PostgreSQL and object storage data, depending on how you've deployed it.
  • Fetch and modify your values as needed for the new release, with helm get values and a text editor.
  • Run helm upgrade.
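
For example, against the default installation (substitute the new chart version):

helm get values dai --namespace dai --output yaml > dai.yaml
# review and edit dai.yaml as needed for the new release
helm upgrade dai oci://harbor.dai.eggplant.cloud/charts/dai \
  --namespace dai \
  --version <new-version> \
  --values dai.yaml \
  --wait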

Each Eggplant DAI release may have specific additional steps. So before applying this procedure, please review the notes below for the upgrade you're performing.

Upgrading DAI 25.3 to 25.4

The DAI upgrade to 25.4 includes changes to how MinIO, PostgreSQL and RabbitMQ are deployed.

You must take backups and follow the steps documented below prior to the upgrade to avoid data loss.

Preparation

  1. Back up MinIO data

    Tip

    This section is only required if you are using the default installation shown at the start of this document, which installs MinIO for you. This section can be skipped if you are using your own existing MinIO installation or AWS S3.

    Before proceeding, you will need to install version RELEASE.2025-08-13T08-35-41Z or newer of the MinIO CLI client from https://github.com/minio/mc.

    ROOT_USER=$(kubectl --namespace dai get secret dai-objectstorage -o json | jq -r '.data."root-user"' | base64 -d )
    ROOT_PASSWORD=$(kubectl --namespace dai get secret dai-objectstorage -o json | jq -r '.data."root-password"' | base64 -d)
    kubectl --namespace dai port-forward service/minio 9000:9000 &
    PID=$!
    mc alias set minio http://localhost:9000 $ROOT_USER $ROOT_PASSWORD --api S3v4
    mkdir backup
    mc cp --recursive --quiet minio/ backup/
    kill $PID
  2. Scale down Eggplant DAI resources:

    All deployments and the Keycloak statefulset need to be scaled down before performing the upgrade.

    kubectl --namespace dai scale deploy --replicas 0 --all
    kubectl --namespace dai scale sts --replicas 0 keycloak
  3. Back up DAI databases

    It is important to back up the DAI databases before proceeding. Below is an example of how they can be backed up when using the default installation shown at the start of this document. If your DAI databases are located on an external Postgres server, you will need to follow your own backup procedure.

    POSTGRES_PASSWORD=$(kubectl get secret postgres --namespace dai -o json | jq '.data."postgres-password"' -r | base64 -d)

    kubectl --namespace dai exec postgres-0 \
    -- /bin/sh -c \
    "export PGPASSWORD=$POSTGRES_PASSWORD && pg_dumpall --username postgres --clean" \
    > dai.dump
  4. Remove the remaining stateful resources:

    Delete the existing RabbitMQ StatefulSet and its associated PersistentVolumeClaim (PVC) to enable a clean redeployment of the upgraded RabbitMQ instance.

    kubectl delete statefulset rabbitmq --namespace dai
    kubectl delete pvc data-rabbitmq-0 --namespace dai
    kubectl delete svc rabbitmq-headless --namespace dai

    Delete the existing Postgres StatefulSet and its associated PersistentVolumeClaim (PVC) to enable a clean redeployment of the upgraded Postgres instance.

    kubectl delete statefulset postgres --namespace dai
    kubectl delete pvc data-postgres-0 --namespace dai

    Remove the MinIO Deployment, its PersistentVolumeClaim (PVC), and its Service to prepare for the upgrade:

    kubectl delete deployment minio --namespace dai
    kubectl delete pvc minio --namespace dai
    kubectl delete svc minio --namespace dai

Upgrade DAI

  1. Perform helm upgrade for the deployment as usual.

Restore Postgres data

Tip

This section is only required if you are using the default installation shown at the start of this document, which installs Postgres for you. This section can be skipped if your database is hosted on an external server.

  1. After successfully upgrading DAI you will need to restore Postgres data from the backup taken earlier.

    kubectl --namespace dai scale deploy --replicas 0 --all
    kubectl --namespace dai scale statefulset keycloak --replicas=0
    POSTGRES_PASSWORD=$(kubectl get secret postgres --namespace dai -o json | jq '.data."postgres-password"' -r | base64 -d)

    kubectl --namespace dai exec -i postgres-0 -- /bin/sh -c \
    "export PGPASSWORD=\$POSTGRES_PASSWORD && psql --username=postgres \
    --dbname=postgres \
    --file -" < dai.dump | tee restore.log
    kubectl --namespace dai scale deploy --replicas 1 --all
    kubectl --namespace dai scale statefulset keycloak --replicas=1
  2. Run helm upgrade for a second time

    After completing the Postgres database restore you will need to run helm upgrade a second time to ensure realm provisioning runs against the restored data.

Restore MinIO data

Tip

This section is only required if you are using the default installation shown at the start of this document, which installs MinIO for you. This section can be skipped if you are using your own existing MinIO installation or AWS S3.

After successfully upgrading DAI you will need to restore MinIO data from the backup taken earlier.

  1. Restore MinIO data as described in Backup and restore MinIO

Upgrading DAI 25.2 to 25.3

The DAI upgrade to 25.3 includes updates to the subcharts used for MinIO, RabbitMQ and Postgres. These upgrades require the manual steps below. As the upgrades include changes to both Postgres and MinIO, it is essential that you take backups prior to the upgrade.

  1. Scale down Eggplant DAI resources:

    Due to changes in the RabbitMQ queue types introduced in this release, we need to ensure that no DAI 25.2 pods are running during the upgrade process. Therefore, all the deployments need to be scaled down before doing the upgrade.

    kubectl --namespace dai scale deploy --replicas 0 --all
  2. Migrate PostgreSQL database from 14.7 to 17.4:

    As of DAI 25.3 a Postgres 17.4 database is recommended. Prior to upgrading to DAI 25.3 you will need to update your database to Postgres 17.4. The method by which you do this will depend on how your database is hosted. Refer to Upgrading a PostgreSQL Cluster for more details.

    An in-place upgrade is not possible when using a container-based PostgreSQL deployment. Follow the steps outlined in Backup and restore PostgreSQL for database migration.

    Therefore, the migration process involves the following steps:

    • Perform a full backup of the DAI databases.
    • Delete the associated persistent volumes.
    • Upgrade PostgreSQL to version 17.4.
    • Restore the databases from the backup.
  3. Remove MinIO Deployment and retain its persistent volume to prepare for upgrade:

    kubectl --namespace dai delete deploy minio --cascade=orphan
  4. Perform helm upgrade for the deployment as usual.

Uninstalling

You can uninstall Eggplant DAI either by running helm uninstall or by removing the namespace you installed it to.
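
For example:

helm uninstall dai --namespace dai
# or remove everything at once, including persistent volume claims, by deleting the namespace:
kubectl delete namespace dai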

If you applied any customizations to use external resources, like a PostgreSQL instance or an S3 bucket, you'll need to remove these separately.

Values

Full documentation of all the supported values in the Eggplant DAI chart.

Support

Contact Eggplant Customer Support if you require further assistance.