Migration Guide

The Ontotext Platform migration guide walks you through handling the breaking changes and deprecations introduced in each release of the Platform.

Migration from 3.7 to 3.8

  • Elasticsearch-related configurations (elasticsearch.*) have been moved from the Semantic Objects service to the Semantic Search service.

Migration from 3.5 to 3.6

  • MongoDB has been removed from all Docker/Docker Compose examples. If you need a reference, see the documentation for version 3.5.

Helm Deployments

This version contains major breaking changes that resolve many issues with the old monolithic Helm chart.

The chart is now built entirely from sub-charts, so make sure you familiarize yourself with their values.yaml files.

For more detailed information, refer to the CHANGELOG.md file included in the Helm chart.
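Sub-chart values are set under the sub-chart's name in the parent chart's values.yaml. A minimal sketch of an override file (the graphdb.deployment.host key appears later in this guide; the hostname is a placeholder):

```yaml
# Override file passed with `helm upgrade -f overrides.yaml ...`
# Each top-level key addresses one sub-chart; consult that sub-chart's
# own values.yaml for the full set of supported keys.
graphdb:
  deployment:
    host: platform.example.com   # placeholder hostname
```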

Migration from 3.4 to 3.5

Before proceeding with the migration, make sure you have read the release notes for Ontotext Platform 3.5.0.

Helm Deployments

In version 3.5, the Helm chart introduces the following breaking changes:

  • High Availability deployment of PostgreSQL with a replication manager. Because the deployment moves to Bitnami’s PostgreSQL HA chart, the persistent data must be migrated.
  • Deprecation of MongoDB in favor of RDF4J SOML schema storage.
  • GraphDB’s official Helm chart is now used as a sub-chart.

If you wish to preserve the persistent data of existing deployments, follow the steps described below.

SOML Schema Storage Migration

This migration is enabled by default and will initialize after upgrading the chart deployment.

For more information on the migration to the RDF4J store, see Storage migration.

Migration Steps

The following steps assume an existing deployment named platform in the default namespace.


The migration will cause temporary downtime of several Platform components due to updates in their configuration maps, pod specifications, persistence changes, etc.

  1. Back up all persistent volume data.

  2. PostgreSQL migration

    1. Add Bitnami’s Helm charts repository

      helm repo add bitnami https://charts.bitnami.com/bitnami
    2. Prepare an override file named fusion-ha.migration.yaml with the following content:

      # Should be the same as in the platform's 3.5 chart
      fullnameOverride: fusionauth-postgresql
      # If the existing deployment has different passwords, update the next configurations to match
      postgresql:
        username: fusionauth
        password: fusionauth
        database: fusionauth
        postgresPassword: postgres
        repmgrPassword: fusionauth
        replicaCount: 1
        resources:
          requests:
            memory: 256Mi
      pgpool:
        adminPassword: fusionauth
      # Update the persistence to the required settings
      persistence:
        storageClass: standard
        size: 1Gi
    3. Install a temporary deployment of bitnami/postgresql-ha with the prepared values and wait until the new pods are running:

      helm install -n default -f fusion-ha.migration.yaml --version 7.6.2 postgresql-mig bitnami/postgresql-ha

      This deployment will serve to migrate the existing PostgreSQL data into the new HA replica set.

    4. Execute the PostgreSQL data migration with:

      kubectl -n default exec -it fusionauth-postgres-0 -- sh -c "pg_dumpall -U fusionauth | psql -U postgres -h fusionauth-postgresql-pgpool"

      Enter the password for the system postgres user from fusion-ha.migration.yaml. The default is postgres.


      If the existing deployment has different credentials, update the command above with the relevant ones.

    5. Uninstall the temporary deployment:

      helm uninstall -n default postgresql-mig

      Wait until the pods are removed. The migrated data is stored in dynamically provisioned PVs/PVCs that will be bound when the Platform chart is upgraded later on.

  3. GraphDB migration

    Due to the migration to the official GraphDB Helm chart, the PVs must be migrated as well. To migrate GraphDB’s data, the new pods must reuse the old pods’ PVs. To achieve this, follow these steps:

    1. Patch all GraphDB PVs (masters and workers) with "persistentVolumeReclaimPolicy":"Retain":

      kubectl patch pv <graphdb-pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'

      This will ensure that the PVs won’t be accidentally deleted.
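With several master and worker PVs, the patch is easy to script. The sketch below only prints the patch commands so they can be reviewed before running them; the PV names are placeholders for the actual names from kubectl get pv:

```shell
# Print the Retain patch for each GraphDB PV.
# The PV names below are examples -- substitute your own.
for pv in graphdb-master-1-pv graphdb-worker-1-pv graphdb-worker-2-pv; do
  echo "kubectl patch pv ${pv} -p '{\"spec\":{\"persistentVolumeReclaimPolicy\":\"Retain\"}}'"
done
```

Pipe the output to sh once the names are verified.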

    2. Delete the GraphDB deployment. If a cluster is used, delete all master and worker deployments.

      kubectl delete deployment.apps/<graphdb-deployment-name>
    3. Delete the GraphDB PVCs. If a cluster is used, delete all master and worker PVCs.

      kubectl delete pvc <graphdb-pvc-name>

      This will release the PVs so they can be reused by the new masters/workers.

    4. Patch the PVs with "claimRef":null so they can go from status Released to Available:

      kubectl patch pv <graphdb-pv-name> -p '{"spec":{"claimRef":null}}'
    5. Patch the PVs with claimRef matching the PVCs that will be generated by the volumeClaimTemplates.

      In Platform 3.5, the default volumes used for GraphDB are dynamically provisioned by using volumeClaimTemplates. The newly created pods must create PVCs that can claim the old PVs. To achieve this, the volumeClaimTemplates for GraphDB’s instances in the values.yaml file must be configured so that they match the PV specs.

      For example, if you have an old GraphDB PV that is 10Gi with storageClassName: standard and with accessModes: ReadWriteOnce, then the volumeClaimTemplates for the GraphDB instance must be set like this:

          accessModes:
            - "ReadWriteOnce"
          resources:
            requests:
              storage: "10Gi"
          storageClassName: standard

      After you have set the correct volumeClaimTemplates, the old GraphDB PVs must be patched so that they are available to be claimed by the generated PVCs. The PVC names generated by the GraphDB chart have the following format:

      • For masters (and standalone instance): graphdb-master-X-data-dynamic-pvc
      • For workers: graphdb-worker-Y-data-dynamic-pvc

      Where X and Y are the counters for masters and workers, respectively.

      Also, the namespace in the PVs’ claimRefs must be updated to the namespace in use.

      The PVs patch is done like this (example for standalone GraphDB):

      kubectl patch pv graphdb-default-pv -p '{"spec":{"claimRef":{"name":"graphdb-master-1-data-dynamic-pvc-graphdb-master-1-0"}}}'
      kubectl patch pv graphdb-default-pv -p '{"spec":{"claimRef":{"namespace":"default"}}}'

      If a cluster is used, repeat this with the respective PV names and master/worker counts in the claimRef names. Once all PVs are patched, they are ready for the Helm upgrade. When the upgrade runs, the new GraphDB pod/pods should create PVCs that claim the PVs used by the previous GraphDB instances.
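For a cluster, the claimRef name and namespace patches can be generated in one pass. This sketch only prints the commands for review; the PV names and the worker PVC naming are assumptions following the master example above, so substitute your actual names:

```shell
# Print combined claimRef patch commands for 2 masters and 2 workers.
# PV names (graphdb-master-1-pv etc.) are placeholders.
for i in 1 2; do
  echo "kubectl patch pv graphdb-master-${i}-pv -p '{\"spec\":{\"claimRef\":{\"name\":\"graphdb-master-${i}-data-dynamic-pvc-graphdb-master-${i}-0\",\"namespace\":\"default\"}}}'"
  echo "kubectl patch pv graphdb-worker-${i}-pv -p '{\"spec\":{\"claimRef\":{\"name\":\"graphdb-worker-${i}-data-dynamic-pvc-graphdb-worker-${i}-0\",\"namespace\":\"default\"}}}'"
done
```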

    6. Provisioning user

      The official GraphDB chart uses a special user for all health checks and provisioning. If you are using the Ontotext Platform with GraphDB security enabled, set graphdb.graphdb.security.provisioningUsername and graphdb.graphdb.security.provisioningPassword to a user that has an Administrator role in GraphDB, so that the health checks and provisioning jobs can work correctly.
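As a values.yaml override this could look like the following sketch; the username and password are placeholders, and the user must be created with the Administrator role in GraphDB beforehand:

```yaml
graphdb:
  graphdb:
    security:
      provisioningUsername: provisioner      # placeholder -- must have the Administrator role
      provisioningPassword: s3cret-passw0rd  # placeholder
```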

  4. (Optional) Elasticsearch PVs

    In Platform 3.5, the default persistence is changed to use dynamic PV provisioning. If you wish to preserve any existing Elasticsearch data, set the following in your values.yaml overrides:

        storageClassName: ""

    This override disables dynamic PV provisioning and uses the existing PVs.


    This step can be skipped in favor of simply rebinding the SOML schema, which will trigger reindexing in Elasticsearch.
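Assuming the official Elasticsearch chart is used as the sub-chart, the empty storageClassName would sit under its volumeClaimTemplate key; a hedged sketch of the override:

```yaml
elasticsearch:
  volumeClaimTemplate:
    storageClassName: ""
```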

  5. Upgrade the existing chart deployment.

    helm upgrade --install -n default --set graphdb.deployment.host=<your hostname> --version 3.5.0 platform ontotext/ontotext-platform


    The upgrade process may take several minutes while the updated components are redeployed.