Deploying Eggplant DAI in Containers
Before proceeding with the installation of Eggplant DAI in containers, you should ensure engineers in your organisation are Certified Kubernetes Administrators (https://www.cncf.io/training/certification/cka/) or have equivalent experience.
Eggplant DAI can be installed on Kubernetes using Helm. You will need to meet the following requirements:
Requirement | Notes |
---|---|
Kubernetes cluster | Tested version 1.29. |
ingress-nginx | Tested version 1.10.0 (chart version 4.10.0). |
Keda v2 | Optional, for autoscaling engines. Tested version 2.13.2. |
Eggplant DAI license | Speak to support if needed. |
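Before installing, a quick client-side check that the required tooling is present can save time. A minimal sketch (it only confirms the binaries are on `PATH`; cluster-side component versions still need to be checked against the table above):

```shell
# Check that the client tools needed for the install are on PATH.
# This does not verify cluster-side component versions.
for tool in kubectl helm; do
  command -v "$tool" >/dev/null || echo "missing: $tool"
done
```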
When you've met those requirements, you can install the default Eggplant DAI deployment by creating a Helm values file. In the example below, substitute the values to suit your own deployment.

```yaml
global:
  postgresql:
    auth:
      postgresPassword: postgres
  ingress:
    host: dai.example.com
  keycloak:
    host: dai.example.com
  devLicense: a-real-license-goes-here
  execLicense: a-real-license-goes-here
  featLicenses: comma-separated-feature-licenses-go-here
  objectStorage:
    minio:
      rootUser: "eggplant"
      rootPassword: "eggplant"
keycloak:
  externalDatabase:
    # This must match the value of global.postgresql.auth.postgresPassword
    password: postgres
keycloak-user-provisioner:
  adminUsers:
    daiAdmin:
      username: admin-username
      email: admin-email
      password: admin-password
```
A few notes:

- `global.ingress.host` and `global.keycloak.host` do not have to be the same domain, but they do have to be resolvable. You can achieve this either by running something like ExternalDNS on your cluster or by manually creating the DNS records and pointing them at your cluster.
- When running in containers, DAI must be used in conjunction with TLS. TLS can be terminated either within the cluster, by adding a certificate to the ingress, or on an external load balancer. See the TLS configuration section below for details.
- The hostname configured under `global.ingress.host` must be solely for DAI use. Running other applications on the same subdomain is not supported.
- `keycloak-user-provisioner.adminUsers.daiAdmin.password` must be at least 12 characters long. You can add additional admin users by adding extra keys under `keycloak-user-provisioner.adminUsers`.
- DAI makes use of configuration snippets within its ingress rules. If you are running a recent version of the ingress-nginx controller, you must ensure it is configured to allow snippet annotations. If installing ingress-nginx with Helm, this can be achieved by setting `controller.allowSnippetAnnotations` to `true`.
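Since `keycloak-user-provisioner.adminUsers.daiAdmin.password` must be at least 12 characters, one way to generate a compliant value is shown below (a sketch; `openssl` is assumed to be available, and any secure generator will do):

```shell
# 18 random bytes base64-encode to a 24-character password,
# comfortably over the 12-character minimum.
ADMIN_PASSWORD=$(openssl rand -base64 18)
echo "${#ADMIN_PASSWORD}"   # prints 24
```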
Full documentation for all values can be found in the Values section below.
Then deploy it to your Kubernetes cluster:
```shell
$ helm upgrade --install \
    --namespace dai \
    --create-namespace \
    dai \
    oci://harbor.dai.eggplant.cloud/charts/dai \
    --version 1.13.9 \
    --values dai.yaml \
    --wait
Release "dai" does not exist. Installing it now.
NAME: dai
LAST DEPLOYED: Fri Feb 17 08:20:17 2023
NAMESPACE: dai
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Thank you for installing dai.
```
Your release is named dai.
To learn more about the release, try:
```shell
$ helm status dai
$ helm get all dai
```
You can access your DAI instance by visiting dai.example.com.
```
admin username: admin-username
admin password: admin-password
```
The Helm chart installs these dependencies, but it doesn't manage backups of data stored in PostgreSQL or MinIO. You need to arrange backups of these services in a production deployment as part of your disaster recovery plan. There's an example of one approach to backups later in this documentation.
Supported customizations
In the default installation above, all dependencies are deployed to Kubernetes with data stored in persistent volumes for PostgreSQL and MinIO. If you have an existing solution in place for PostgreSQL or AWS S3-compatible object storage that you want to use instead, you can customize the Eggplant DAI installation to use these. Further, you may want to pass credentials using Kubernetes secrets rather than in the values file for improved security.
This section of the documentation gives examples showing how to customize your installation. All examples will use secrets for credentials. All the examples given are just snippets that are meant to be added to the default installation values demonstrated above.
Object storage configuration
Eggplant DAI depends on an S3 compatible object storage solution for persisting assets such as test screenshots. The Helm chart gives several options for configuring this.
Bundled MinIO (Default)
By default, the Eggplant DAI Helm chart deploys MinIO as a sub-chart with a random root-user and root-password.
You can override these random values by providing an existing secret. First, create a secret containing the credentials:

```yaml
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: dai-objectstorage
stringData:
  root-user: username
  root-password: password
```

```shell
$ kubectl -n dai apply -f dai-objectstorage.yaml
```
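As an aside, `stringData` accepts plain-text values; Kubernetes stores them base64-encoded under `data`, so decode them when reading a secret back. A quick local round-trip illustrates the encoding:

```shell
# Values supplied via stringData end up base64-encoded in .data;
# use base64 -d to read them back.
encoded=$(printf '%s' "password" | base64)
printf '%s' "$encoded" | base64 -d   # prints: password
```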
Then update your values file to point to the existing secret and run Helm upgrade:
```yaml
global:
  objectStorage:
    minio:
      existingSecret: dai-objectstorage
minio:
  auth:
    existingSecret: dai-objectstorage
```

```shell
$ helm upgrade dai oci://harbor.dai.eggplant.cloud/charts/dai --version 1.13.9 -f dai.yaml --wait
```
Note that `global.objectStorage.minio.existingSecret` and `minio.auth.existingSecret` must match.
You can further customize your MinIO installation by passing values under the `minio` key in your values file. The MinIO installation is provided by the Bitnami chart, so please refer to their documentation for available options.
Changes to the MinIO configuration will not be supported by Eggplant.
Existing MinIO
If you have an existing MinIO installation, you can use it instead as follows, using the same secret created above.
```yaml
global:
  objectStorage:
    minio:
      existingSecret: dai-objectstorage
      endpoint: my.minio.deployment.example.com
minio:
  enabled: false
```
Note the `minio` key setting `enabled` to `false`. This prevents the bundled MinIO from being deployed.
Eggplant cannot provide support for MinIO installations external to your DAI installation.
S3
AWS S3 can be configured for object storage with an existing secret as follows. First, prepare an existing secret with credentials in:
```yaml
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: dai-objectstorage
stringData:
  aws-access-key-id: my-access-key-id
  aws-secret-access-key: my-secret-access-key
```

```shell
$ kubectl -n dai apply -f dai-objectstorage.yaml
```
Modify your values file to update or add the following keys:

```yaml
global:
  objectStorage:
    provider: "aws"
    aws:
      existingSecret: dai-objectstorage
      awsAccessKeyIdKey: aws-access-key-id
      awsSecretAccessKeyKey: aws-secret-access-key
      region: "eu-west-1"
minio:
  enabled: false
```
Now you can deploy it to your cluster with Helm.
PostgreSQL
Eggplant DAI uses PostgreSQL for data storage. The Helm chart provides several options for configuring it.
Bundled PostgreSQL (Default)
By default, the Eggplant DAI Helm chart deploys PostgreSQL as a sub-chart with the username and password both set to `postgres`.

To override this, create a secret containing the credentials:
```yaml
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: dai-postgres
stringData:
  postgres-password: my-postgres-password
```
Modify your values file to update or add the following keys and apply it to your cluster with Helm:
```yaml
global:
  postgresql:
    auth:
      existingSecret: dai-postgres
keycloak:
  externalDatabase:
    existingSecret: dai-postgres
    existingSecretPasswordKey: postgres-password
```
Note `keycloak.externalDatabase.existingSecretPasswordKey`: by default, the Bitnami Keycloak chart expects the existing secret to hold the database password under the key `password`, but the Bitnami PostgreSQL chart and DAI default to `postgres-password` as the key. You can either override the behaviour of the Keycloak chart, as above, or alternatively set `global.postgresql.auth.secretKeys.adminPasswordKey`.
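For illustration, the alternative approach mentioned above might look like the snippet below (a sketch assuming your secret stores the password under the key `password` rather than `postgres-password`):

```yaml
global:
  postgresql:
    auth:
      existingSecret: dai-postgres
      # Tell the PostgreSQL chart which key in the secret holds the password
      secretKeys:
        adminPasswordKey: password
```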
The PostgreSQL installation is provided by the Bitnami chart. You can further customize it by passing options under the `postgresql` key in your values file. See the Bitnami documentation for available options.

If you override `extraEnvVars`, you need to ensure you also set the `POSTGRESQL_DATABASE` environment variable to `keycloak`. This creates the Keycloak database that is configured under the `keycloak` key in the default values.
Eggplant cannot support changes to the PostgreSQL configuration.
Existing PostgreSQL
If you have an existing PostgreSQL installation, or would like to use an external service like AWS RDS, you can do so.
Using the same existing secret as above, modify your values file to set the following keys:
```yaml
global:
  postgresql:
    host: my.postgresql.host.example.com
    auth:
      existingSecret: dai-postgres
keycloak:
  externalDatabase:
    existingSecret: dai-postgres
    existingSecretPasswordKey: postgres-password
    host: my.postgresql.host.example.com
postgresql:
  enabled: false
```
Note that if you use an existing PostgreSQL deployment, you also need to update the Keycloak configuration to use this.
Engine scaling
The engine component of Eggplant DAI is used for test execution and report generation. As your DAI instance becomes busier, you will need to scale this component to handle greater test volumes. We recommend using Keda to manage this scaling.
To use Keda, first install it according to upstream instructions.
Only Keda v2 is supported.
Then enable Keda by adding the following to your values file:
```yaml
ai-engine:
  keda:
    enabled: true
```
If you can't use Keda for some reason, you can manually manage the number of engine replicas by adding the following to your values file, increasing it as your instance becomes busier.
```yaml
ai-engine:
  replicaCount: 2
```
Keycloak
Eggplant DAI depends on Keycloak for authentication and authorization services. We bundle this as a sub-chart and do not currently support using your own Keycloak installation.
Configuring TLS
When running DAI in containers, the use of TLS certificates is mandatory. However, starting with DAI version 7.3, it is possible to use TLS certificates signed by a private certificate authority (see the Adding a custom TLS Certificate Authority (CA) section below). How you add TLS is up to you and is likely to depend on your infrastructure. This may include:
- Adding the TLS certificates to a load balancer external to the cluster and terminating the TLS connections there
- Adding wildcard certificates to the ingress-nginx controller so that all ingress rules/hosts can share a common certificate
- Adding a Kubernetes secret containing the TLS certificates to the DAI namespace and supplying the secret details to DAI via values

The first two options depend on your infrastructure and do not require DAI configuration changes. If you would like to add a TLS certificate to DAI you can:
1. Obtain your certificate and key in PEM format. The certificate should include the full chain.

2. Create the TLS secret in the DAI namespace (note that you may have to create the namespace first):

   ```shell
   kubectl create secret tls tls-secret --cert=path/to/cert/file --key=path/to/key/file -n dai
   ```

3. Update the ingress section of your DAI Helm values file to add `tls` sections under `global.ingress` and `global.keycloak` as below (update the hostname and secret name to suit your installation):

   ```yaml
   # dai.yaml
   global:
     ingress:
       host: dai.example.com
       tls:
         - hosts:
             - dai.example.com
           secretName: tls-secret
     keycloak:
       tls:
         secretName: tls-secret
   ```

4. Complete the Helm install as normal.
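Before creating the secret, it can be worth sanity-checking that the certificate and key files actually belong together. One way for RSA key pairs (a sketch; the file names are illustrative):

```shell
# An RSA certificate and key match when their public moduli are equal.
cert_mod=$(openssl x509 -noout -modulus -in path/to/cert/file)
key_mod=$(openssl rsa -noout -modulus -in path/to/key/file)
if [ -n "$cert_mod" ] && [ "$cert_mod" = "$key_mod" ]; then
  echo "certificate and key match"
else
  echo "certificate and key do NOT match" >&2
fi
```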
Adding a custom TLS Certificate Authority (CA)
The individual DAI services use Keycloak for authentication and authorization of requests. This requires that the services can reach the Keycloak endpoint, including validating the TLS certificates that have been configured. Each release of DAI ships with up-to-date CA certificates, allowing most publicly signed certificates to be validated. If your DAI instance is unable to verify the configured TLS certificate, you can supply the associated CA via the Helm values by setting the `global.ingress.customCACert` value.
The example snippet below shows a private CA certificate being added to DAI. This would allow TLS certificates signed by that CA to be used with DAI.

```yaml
global:
  ingress:
    host: dai.example.com
    customCACert: |
      -----BEGIN CERTIFICATE-----
      MIIFYDCCA0igAwIBAgIJAL6zT1uUli/yMA0GCSqGSIb3DQEBCwUAMEUxCzAJBgNV
      BAYTAkFVMRMwEQYDVQQIDApTb21lLVN0YXRlMSEwHwYDVQQKDBhJbnRlcm5ldCBX
      aWRnaXRzIFB0eSBMdGQwHhcNMjQwNDEwMTQ0NTU3WhcNMjkwNDA5MTQ0NTU3WjBF
      MQswCQYDVQQGEwJBVTETMBEGA1UECAwKU29tZS1TdGF0ZTEhMB8GA1UECgwYSW50
      ZXJuZXQgV2lkZ2l0cyBQdHkgTHRkMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIIC
      CgKCAgEAyP3ve0Y3Y9Wh0G6dxWGQSdOPzVRgzv2adUV763GQOiEnSy8Z44X9rYNS
      s6Yd80H5hDuHFPPTHeQdOQ+BVFlUqKT1nsVzDwwy9eApHLRiv3CXKE749sqRiFiQ
      PwEGxk2zjRqOFm2phFKY81ikdazjd9Kl9bfdALrmJk6vFCHV3bwIgOFlkl42nWT+
      cAnjZEo4p9pejKyccC/43L/BiVZ3ILYKmAhtQFgtBfX0STlV1eqvr0tKOU1WcHdb
      AkUoWgmymEAqKOTgF3mptjeg2a+lAaXPMD25vUI5OZ3Kn+kGoL2bRM/3ujQdwv+0
      reSvah2+XmJHeaGA3Mr00IGJF85H9YQDgrU9/nvJQX0NyDjO9Z4qViROU40nCMrf
      41cBmD9e5DZL5bSyhAiNlbq0MplIvm2ervlvou6ymPfVkQmdet5rGNe3+XOFbamc
      c2j65tgUm6+KSoho7xi+3PvmRDwgplTKLavcdZMHxjVWp6gQm6PVOxrheII+ZuIS
      Au3ixVaN4anPRlV02EvkBu8Hf67faL6sL65kTYkL98BSdpIepJkHEBONv9DuCH9g
      5jiGHXPyBkMuga7uxF6OTVR/SHd+io0ookbUyfuSIWywBBHvDu0cJG8CmrdptzCv
      YBGg7fjz8IOrawOdv/3VVEzixe+qVr3HFT+eO3MMf9eNCo7M7U8CAwEAAaNTMFEw
      HQYDVR0OBBYEFKSpIbnsb6Fff2dbiJYxLrtiyoGKMB8GA1UdIwQYMBaAFKSpIbns
      b6Fff2dbiJYxLrtiyoGKMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQELBQAD
      ggIBAJD/rb1XpIXchX58dG6AV+uPtZ7ek1kqJHTA6fHW7UchZrtXW27HCraPnZ9m
      hN/iiXxyflPWQFpndAWA3Zzor3eKZMPek5DvXa5yosC/2/kRlw+GkIXhrJEKvOaM
      tIOXgKMuwZyLsYV5PD1Y3J1q+jx58euiZtsQRIDM8K7BuzmMruvWGXzxT2Vm54ZS
      6s597X2YKlSu/34+G9a/8N9OqiULp+k0QKz/DXOpWXk8q9u95Ga2WExfsSH8H+ZW
      2swVt9shdH5/L7Jbpm9Kq1acyI9WPh9oXDsGIcoWh10zblgqajIMzYX22tHjJc7M
      GcoLubn9sSkJgqP46IfOngmbH2Ik9mDylXl11LPgrw8XudMh/Uf5RkvsJX9tAmib
      IHrM2n0tSQiuZA16suABvGsmQkdhBzhFnpZJfgmxVXnEmwU8k2cUByil09ZoKtYw
      7YvMz+p+uFdl+uoOhWOLbfSftymMkeHNRdykiNCXqCjPzZgHVwHCCIPIDWpIbb7w
      qkzoUw6dp1YOz4RO0ZsJ7zhztSsSGHinvbH1TnXzdCj1OjkP612/lRJB3NC99rbg
      HFKMUeiXEtZbMtTYaBxJ9vWxtkbeVwaWZommZVjDk6kYFiUsShij9olN78JcZB6/
      2TxFcGsToiq0hJQ19soqTKuBP79TCpeQpOQj/glYB8MZbU+R
      -----END CERTIFICATE-----
```
Configuring Node Selectors
Node selectors can be added via the Helm values to control which Kubernetes nodes the DAI services run on. You can set the node selector for all the DAI services by setting `global.dai.nodeSelector` in values. As DAI also uses third-party components, these have to be set separately within the values file. The example snippet below shows how the node selector can be set to use only nodes with the label `node-pool=application` for all DAI services and in the PostgreSQL, RabbitMQ, MinIO, and Keycloak sub-charts.
```yaml
global:
  dai:
    nodeSelector:
      node-pool: application
postgresql:
  primary:
    nodeSelector:
      node-pool: application
rabbitmq:
  nodeSelector:
    node-pool: application
minio:
  nodeSelector:
    node-pool: application
keycloak:
  nodeSelector:
    node-pool: application
```
Backup and restore
You must regularly back up configuration and results data from your DAI installation. Data that needs to be backed up is stored in PostgreSQL (as configured for DAI and Keycloak) and in object storage.
How you back up this data will depend on how you've configured your deployment, but here we provide an example of how both can be backed up in the default installation shown at the start of this document.
Backup and restore PostgreSQL
Eggplant DAI uses several databases to store its data, so in the default installation we recommend using `pg_dumpall` to ensure you back up all of them. If you're using a shared database instance, you need to ensure you back up the following databases:

- execution_service
- keycloak
- sut_service
- ttdb
- vam
In the example below, we execute `pg_dumpall` directly in the default PostgreSQL pod. The result is then streamed to a `dai.dump` file on the local computer:
```shell
$ kubectl --namespace dai exec postgres-0 \
    -- /bin/sh -c \
    'export PGPASSWORD=$POSTGRES_PASSWORD && pg_dumpall --username postgres --clean' \
    > dai.dump
```
The command given here includes the `--clean` option. This causes `pg_dumpall` to include commands that drop the databases in the dump. That makes restoring easier, but be aware that restoring will drop and recreate those databases.
In practice, you would likely want to:

- compress the dump
- store it on a backup storage server
- run it on a schedule

but the use of `pg_dumpall` would still stand.
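For example, compression can happen in flight by inserting `gzip` into the pipeline. The sketch below uses a stand-in `printf` so it runs anywhere; in practice you would replace it with the `kubectl exec ... pg_dumpall` command shown above:

```shell
# Compress the dump in flight rather than writing it out uncompressed.
# Stand-in for: kubectl --namespace dai exec postgres-0 -- ... pg_dumpall ...
printf '%s\n' '-- pretend this is the pg_dumpall output' \
  | gzip > dai.dump.gz
gunzip -c dai.dump.gz   # prints the original dump text
```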
To restore the backup, you can reverse the process as follows:
```shell
$ kubectl --namespace dai exec postgres-0 \
    -- /bin/sh -c \
    'export PGPASSWORD=$POSTGRES_PASSWORD && psql --username postgres \
    --dbname postgres \
    --file -' < dai.dump
```
A few notes:

- We used the `--clean` option when creating the dump, so all databases in the backup will be dropped and recreated.
- We specify `--dbname postgres`. As the backup was created with `--clean`, you'll get errors if you connect to one of the databases being dropped as part of the restore.
Backup and restore MinIO
Images and other assets are stored in object storage rather than the database. You must back these up in addition to the database content discussed above. A quick way to run this backup from your local machine is demonstrated below. The example requires you to have the MinIO client tools installed locally.
```shell
$ ROOT_USER=$(kubectl -n dai get secret dai-objectstorage -o json | jq -r '.data."root-user"' | base64 -d)
$ ROOT_PASSWORD=$(kubectl -n dai get secret dai-objectstorage -o json | jq -r '.data."root-password"' | base64 -d)
$ kubectl -n dai port-forward service/minio 9000:9000 &
$ PID=$!
$ mc alias set minio http://localhost:9000 $ROOT_USER $ROOT_PASSWORD --api S3v4
$ mkdir backup
$ mc cp --recursive --quiet minio/ backup/
$ kill $PID
```
As before, it's likely you'll want to compress the backup, move it to an appropriate storage server and execute it on a schedule.
To restore the backup, you can just reverse the copy command:
```shell
$ mc mb minio/assets
$ mc mb minio/screenshots
$ mc cp --recursive --quiet backup/ minio/
```

This assumes you've used the default configuration with separate assets and screenshots buckets, in which case you need to create the buckets with `mc mb` before you can restore.
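Before restoring (and certainly before deleting anything), it's sensible to confirm the backup directory actually contains objects. A minimal sketch; the `check_backup` helper is hypothetical, not part of the MinIO client:

```shell
# Fail loudly if a backup directory is empty before attempting a restore.
check_backup() {
  count=$(find "$1" -type f 2>/dev/null | wc -l | tr -d ' ')
  if [ "$count" -eq 0 ]; then
    echo "backup at '$1' is empty - aborting restore" >&2
    return 1
  fi
  echo "backup at '$1' contains $count objects"
}
# Usage: check_backup backup && mc cp --recursive --quiet backup/ minio/
```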
Upgrading
The general procedure for upgrading is the same as for any Helm release:

1. Back up your PostgreSQL and object storage data, depending on how you've deployed it.
2. Update your repositories with `helm repo update`.
3. Fetch and modify your values as needed for the new release with `helm get values` and a text editor.
4. Run `helm upgrade`.
Each Eggplant DAI release may have specific additional steps. So before applying this procedure, please review the notes below for the upgrade you're performing.
Upgrading DAI 7.3 to 7.4
Beyond the general guidance above there are no specific steps to follow for an upgrade from 7.3 to 7.4.
Upgrading DAI 7.2 to 7.3
Beyond the general guidance above there are no specific steps to follow for an upgrade from 7.2 to 7.3.
Upgrading DAI 7.1 to 7.2
Beyond the general guidance above there are no specific steps to follow for an upgrade from 7.1 to 7.2.
Upgrading DAI 7.0 to 7.1
Beyond the general guidance above there are no specific steps to follow for an upgrade from 7.0 to 7.1.
Upgrading DAI 6.5 to 7.0
The DAI 7.0 release includes an update to MinIO that is incompatible with previous versions. If you are using the bundled MinIO for object storage, you need to back up the old MinIO installation and restore the data after the DAI upgrade:

1. Back up the existing MinIO installation as described above.

2. Remove the existing MinIO deployment and PVC:

   ```shell
   kubectl delete pvc -l app.kubernetes.io/name=minio --wait=false && kubectl delete deployment -l app.kubernetes.io/name=minio
   ```

3. Review your values and run `helm upgrade`. This creates a clean installation of MinIO at the same time as upgrading the other DAI components.

4. Restore the existing MinIO data to the new MinIO deployment as described above.
Upgrading DAI 6.4 to 6.5
DAI 6.5 introduces a new Helm chart that is incompatible with previous releases. The recommended upgrade procedure for this release is therefore different:

1. Back up your PostgreSQL and object storage data.
2. Update your repositories with `helm repo update`.
3. If using the bundled MinIO, fetch the root user and password for your MinIO deployment.
4. Fetch the root user and password for your Keycloak deployment.
5. Fetch your existing values and translate them to the new values required.
6. Uninstall your old deployment with `helm uninstall -n dai dai`.
7. Additionally, remove all old PVCs and jobs with `kubectl -n dai delete jobs --all && kubectl -n dai delete pvc --all`.
8. Install the 6.5 Helm release with:

   ```shell
   helm install -n dai dai eggplant/dai --version 1.3.4 -f new-values.yaml --wait
   ```

9. If using the bundled MinIO, restore data from backup.
The exact process may vary depending on your previous deployment. Please be careful to verify backups before deleting resources and to delete the correct resources.
While we recommend reviewing the rest of the chart documentation to create a new values file, below is a mapping from keys in the pre-6.5 values file to their location in the 6.5+ values file. This is not a complete list of values, only of those that have moved. You'll still need to review the documentation for the values to ensure you set all required keys and they are set correctly.
Old Key | New Key |
---|---|
global.adminusername | keycloak-user-provisioner.adminUsers.daiAdmin.username |
global.adminEmail | keycloak-user-provisioner.adminUsers.daiAdmin.email |
global.adminPassword | keycloak-user-provisioner.adminUsers.daiAdmin.password |
global.license | global.devLicense, global.execLicense, global.featLicenses |
externalDatabase | global.postgresql |
externalBroker | global.rabbitmq |
objectStorage | global.objectStorage |
ingress.hostnames | global.ingress.host |
ingress.tls | global.ingress.tls |
keda.enabled | ai-engine.keda.enabled |
keycloak.realm | global.keycloak.realm |
keycloak.url | global.keycloak.host |
keycloak.adminUser | global.keycloak.user |
keycloak.adminPassword | global.keycloak.password |
keycloak.smtp | keycloak-realm-provisioner.smtp |
Upgrading Eggplant DAI 6.3 to 6.4
The DAI 6.4 release updates the internal version of Keycloak to version 19. To upgrade to this new version:

1. Edit your YAML file to move the `keycloak.adminPassword` key to `keycloak.auth.adminPassword`.
2. Similarly, if not using the default admin username, move the `keycloak.adminUser` key to `keycloak.auth.adminUser`.
3. The Helm upgrade process deploys a new StatefulSet which is incompatible with the existing one, so the original StatefulSet needs to be deleted before performing the Helm upgrade (note that once this step is completed, the DAI instance will be inaccessible until the upgrade to 6.4 is complete):

   ```shell
   $ kubectl delete statefulsets.apps -l app.kubernetes.io/name=keycloak --namespace dai
   ```
Upgrading Eggplant DAI from version 6.2 to 6.3
- If you use KEDA, v1 is no longer supported. You must upgrade to KEDA v2 before upgrading DAI. To do this, make sure you remove the `ai-engine` job before upgrading KEDA:

  ```shell
  $ kubectl -n dai delete job ai-engine
  ```
Upgrading Eggplant DAI from version 5.3 to 6.0
- You no longer need to set the service token and JWT secret. Remove these values.
- The Helm chart deploys a Keycloak instance in the same namespace as the rest of the DAI components. You must, however, specify the Keycloak URL, which is set to `https://kc-<ingress-hostname>`, where `<ingress-hostname>` is the parameter value that you specified in the values file.
Uninstalling
You can uninstall Eggplant DAI either by running `helm uninstall` or by removing the namespace you installed it to.
If you applied any customizations to use external resources, like a PostgreSQL instance or an S3 bucket, you'll need to remove these separately.
Values
Full documentation of all the supported values in the Eggplant DAI chart.
Support
Contact Eggplant Customer Support if you require further assistance.