Migrating GraphDB Configurations¶

To migrate from one GraphDB version to another, follow the migration notes for your target version in the compatibility table below, and then follow the steps described further down on this page.

Important

Ontotext Refine is now developed as a separate product. All existing OntoRefine and RDF mapping functionalities remain available in it. See more in the Refine documentation.

Warning

The OntoRefine CLI is not available in Refine version 1.0 but will be available in version 1.1. If you need this functionality, use OntoRefine with GraphDB 9.11 until Refine 1.1 is released. See OntoRefine CLI in the GraphDB 9.11 documentation.

Compatibility between the versions of GraphDB, Connectors, and third-party connectors¶

GraphDB   RDF4J   Connectors   Elasticsearch   Lucene   Solr     Kafka
10.0.0    4.0.2   16.0.0       7.16.3          8.11.1   8.11.1   2.8.0

Migration notes for 10.0.0:

• Introduced the new high-availability cluster where any node can be a leader or a follower (akin to the master and worker nodes in the old cluster). See the detailed migration procedure with a cluster below.

• Introduced a new single repository type that replaces the existing Free, SE, and EE repositories: existing repositories will be automatically converted to the new type. If you have existing repository configuration templates outside a GraphDB installation, you need to convert them to the new type before using them with GraphDB 10.

• Redesigned the filtering mechanism in the connectors: you need to rewrite the filters and recreate the connectors. See Migrating connectors below.

• The GraphDB REST API has been refactored. Some of the changes include moving the Import and SPARQL template controllers to a new base URL, the use of kebab-case for compound words in URLs, and the removal of the X-GraphDB-Password header from the Security management controller. See Using the GraphDB REST API for more information.

• Refactored remote locations: they can no longer be activated, but all repositories in remote locations are accessible via the Workbench.

• OntoRefine has been removed from GraphDB and is now developed as a separate product. See the note above.


Migrating without a cluster¶

Warning

Keep in mind that after migrating to GraphDB 10, you cannot automatically revert to GraphDB 9.x.

1. Stop the GraphDB 9.x instance.

2. Back up your repositories and configuration – this will ensure you can revert safely if something goes wrong during the upgrade.

1. To back up all repositories, copy the data directory. See also Backing up and Restoring a Repository for additional ways to back up repository data.

2. To back up all configuration, copy the work and conf directories.

3. Your existing GraphDB 9.x home directory (containing the conf, data, and work directories) can be used directly as the GraphDB 10.0 home directory.

Hint

You can also copy the conf, data, and work directories from the GraphDB 9.x home directory to a new directory and use it as the GraphDB 10.0 home directory. In this case, your GraphDB 9.x home directory also serves as the backup, so you may skip the backup steps.

The various directories are described in detail here.

4. Start GraphDB 10.0.

5. If you use any GraphDB connectors, please follow the guidelines in Migrating connectors.
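The backup part of the steps above can be sketched as a small shell helper. It assumes the default layout with a single home directory containing conf, data, and work; all paths in the example are placeholders, so adjust them to your installation.

```shell
# Minimal sketch of the backup in step 2: copy the data, conf, and work
# directories from a (stopped) GraphDB 9.x home to a safe location.
backup_graphdb_home() {
    home_dir="$1"    # GraphDB 9.x home containing conf, data, work
    backup_dir="$2"  # destination directory for the backup copy
    mkdir -p "$backup_dir"
    for d in data conf work; do
        if [ -d "$home_dir/$d" ]; then
            cp -R "$home_dir/$d" "$backup_dir/"
        fi
    done
}

# Example invocation (hypothetical paths):
# backup_graphdb_home /opt/graphdb-home /opt/graphdb-9-backup
```

After verifying the backup, the same home directory can be used directly as the GraphDB 10.0 home directory, as described in step 3.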

Migrating with a cluster¶

The cluster in GraphDB 10 is based on an entirely new approach and is not directly comparable or compatible with the cluster in GraphDB 9.x. See the High Availability Cluster Basics for more details on how the new GraphDB cluster operates.

The described procedures refer to the three recommended cluster topologies in the 9.x cluster: a single master with three or more workers; two masters sharing workers, where one of the masters is read-only; and multiple masters with dedicated workers. See more about 9.x cluster topologies.

Understand¶

You will need an existing GraphDB 9.x cluster in good condition before you start the migration. Data and configuration will be copied from two of the nodes:

• A worker node that is in sync with the master. This node will provide:

• The data for each repository that is part of the GraphDB 9.x cluster.

• Any repositories that are not part of the cluster, e.g., an Ontop repository created on the same instance as the worker repository. Typically, these are used via internal SPARQL federation in the cluster.

• A master node that will provide:

• The user database containing users, credentials, and user settings.

• Any repositories that are not part of the cluster, e.g., an Ontop repository created on the same instance as the master repository. Typically, these are used by connecting to the repository via HTTP – directly or via standard SPARQL federation.

• The graphdb.properties file that contains all GraphDB configuration properties.

The instructions below assume your GraphDB 9.x setup has a single home directory that contains the conf, data, and work directories. If your setup uses explicitly configured separate directories for any of these, you need to adjust the instructions accordingly. The various directories are described in detail here.

Important

The cluster in GraphDB 10 is configured at the instance level, while the cluster in GraphDB 9.x is defined per repository. This means that every repository you migrate following the steps below will automatically become part of the cluster.

Once a cluster is created, it is not possible to have a repository that is not part of the cluster in GraphDB 10.

Prepare¶

In order to minimize downtime during the migration, you may want to keep the GraphDB 9.x cluster running in read-only mode while performing the migration.

To make a master read-only, go to Setup ‣ Cluster, click the master node, and enable the read-only setting.

Alternatively, you can reconfigure your application such that it does not do any writes during the migration.

Procedure¶

To migrate a cluster configuration from GraphDB version 9.x to the 10.0 cluster, please follow the steps outlined below.

Warning

The instructions are written in such a way that your existing GraphDB 9.x setup is preserved so you can abort the migration at any point and revert to your previous setup. Note that once you decide to go live with the migrated GraphDB 10 setup, there is no automatic way to revert that configuration to GraphDB 9.x.

1. First, choose a temporary GraphDB 10 home directory that will be used to copy files and directories and bootstrap all the nodes.

Hint

Throughout the instructions below, “temporary GraphDB 10 home directory” refers to this directory.

2. Select one of the worker nodes that is in sync with the master.

3. Stop the GraphDB 9.x instance where the worker node is located – the rest of the GraphDB 9.x cluster will remain operational.

4. Locate the data directory within the GraphDB 9.x home directory of the worker node and copy it to the temporary GraphDB 10 home directory.

• The data/repositories directory contains all repositories and their data.

• If any repository is a master repository, delete it from the copy.

5. Select one of the master nodes.

6. Stop the GraphDB 9.x instance where the master node is located – you may want to point your application to another master or a worker repository so that read operations will continue to work during the migration.

7. Locate the data directory within the GraphDB 9.x home directory of the master node and copy it to the temporary GraphDB 10 home directory.

• The data/repositories directory contains all repositories and their data.

• If any repository is a master repository, do not copy it.

• If you have only master repositories on the master node, you can skip this step.

8. Locate the work directory within the GraphDB 9.x home directory of the master node and copy it to the temporary GraphDB 10 home directory.

• On GraphDB 9.x, the work directory contains the user database.

Note

After copying the work directory from the master to the new nodes, the old locations of the GraphDB 9.x cluster workers will be visible in the Workbench of the new nodes. We recommend deleting the old locations.

9. Locate the conf directory within the GraphDB 9.x home directory of the master node and copy it to the temporary GraphDB 10 home directory.

10. Choose the number of nodes for the new cluster. Due to the nature of the Raft consensus algorithm on which the GraphDB 10 cluster is based, an odd number of nodes is recommended, e.g., three, five, or seven.

As a rule of thumb, use as many nodes as the number of workers you had, adding or removing a node if needed to make the number odd. For example:

• If you had three workers, use three nodes.

• If you had six workers, use five or seven nodes.

11. Copy the temporary GraphDB 10 home directory to each node to serve as the GraphDB 10 home directory on that node.

12. Edit the graphdb.properties file on each node to reflect any settings specific to that node, e.g., graphdb.external-url or SSL certificate properties. Keep general properties, especially graphdb.auth.token.secret and other security-related properties, identical on all nodes.

• If necessary, consult the graphdb.properties file on that node from your GraphDB 9.x setup.

• If the nodes are hosted on the same machine, edit the graphdb.connector.port property so that it is different for each node.

• See also the notes on configuring networking properties related to the GraphDB 10 cluster.

13. Start GraphDB 10 on each node.

• Make sure each node is up and has a valid EE license. If no license is applied, you will still be able to create the cluster, with all nodes in the Follower state, but no leader will be elected. If you then attempt to run a query on any of them, their state will change to Restricted.

14. On any of the instances that you just created, go to Setup ‣ Cluster in the Workbench and create the cluster group.

• You can also create it via the Workbench REST API.

15. If you use any GraphDB connectors, please follow the guidelines in Migrating connectors.
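The file-copy part of the procedure (steps 4 through 9) can be sketched as a shell helper. It assumes the default home directory layout and a single master repository to exclude; every path and name below is a placeholder for your actual node homes.

```shell
# Sketch of steps 4-9: assemble the temporary GraphDB 10 home directory
# from an in-sync worker node and a selected master node.
assemble_gdb10_home() {
    worker_home="$1"  # home directory of the in-sync worker node
    master_home="$2"  # home directory of the selected master node
    target="$3"       # temporary GraphDB 10 home directory
    master_repo="$4"  # ID of the master repository (must NOT be migrated)

    # Step 4: repository data from the worker, minus master repositories.
    mkdir -p "$target"
    cp -R "$worker_home/data" "$target/"
    rm -rf "$target/data/repositories/$master_repo"

    # Step 7: non-clustered repositories from the master node, skipping
    # the master repository and anything already copied from the worker.
    for repo in "$master_home"/data/repositories/*/; do
        name=$(basename "$repo")
        if [ "$name" != "$master_repo" ] && [ ! -d "$target/data/repositories/$name" ]; then
            cp -R "$repo" "$target/data/repositories/$name"
        fi
    done

    # Steps 8-9: user database (work) and configuration (conf) from the master.
    cp -R "$master_home/work" "$target/"
    cp -R "$master_home/conf" "$target/"
}
```

The resulting directory is then copied to each node as described in step 11, after which the per-node graphdb.properties adjustments from step 12 still need to be made by hand.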

Reverting the procedure¶

You can revert to your old setup by restarting the worker and master nodes that you stopped while performing the migration.

If you set your master to read-only, do not forget to set it back to write mode using the same Workbench interface that you used to make it read-only.

Example migration¶

Given the following GraphDB 9.x cluster setup consisting of two masters and three workers for each master, or a total of eight GraphDB instances:

graphdb1.example.com
• Master repository master1, the primary master repository

• Worker repository mydata, which is not part of any cluster

graphdb2.example.com
• Master repository master2, the secondary master repository

graphdb3.example.com
• Worker repository worker1 connected to master1

• Ontop repository sql1

graphdb4.example.com
• Worker repository worker2 connected to master1

• Ontop repository sql1

graphdb5.example.com
• Worker repository worker3 connected to master1

• Ontop repository sql1

graphdb6.example.com
• Worker repository worker4 connected to master2

• Ontop repository sql1

graphdb7.example.com
• Worker repository worker5 connected to master2

• Ontop repository sql1

graphdb8.example.com
• Worker repository worker6 connected to master2

• Ontop repository sql1

You choose the worker worker1 and the master master1 to perform the migration.

After completing the steps that copy files from the worker and the master, you should have a directory structure in the temporary GraphDB 10 home that looks like this:

Directory                      Description
data/repositories/worker1/     The worker repository copied from the worker node
data/repositories/sql1/        The Ontop repository copied from the worker node
data/repositories/mydata/      The non-clustered worker repository copied from the master node
conf/graphdb.properties        The GraphDB configuration file copied from the master node
work/workbench/settings.js     The GraphDB 9.x Workbench settings and user database copied from the master node

There may be other files in the data, conf, and work directories, e.g., conf/logback.xml, that are safe to keep in the copy in order to preserve as much of the existing configuration as possible.

Note, however, that you should NOT have the following directories:

Directory                      Description
data/repositories/master1/     The master repository from the master node should NOT be copied

Since you have six workers in the GraphDB 9.x cluster, it makes sense to choose five (the number of workers minus one to make the number odd) nodes for the GraphDB 10.0 cluster.

If you proceed with the migration, your GraphDB 10 cluster will contain the following three repositories:

Repository ID   Description
worker1         Migrated GraphDB repository – note that it uses the repository ID from the worker node you copied the files from
sql1            Migrated Ontop repository
mydata          Migrated GraphDB repository that previously was not part of any cluster

Configuring external cluster proxy¶

See how to configure the external GraphDB 10.0 cluster proxy here.

Migrating connectors¶

GraphDB 10.0 introduces major changes to the filtering mechanism of the connectors. Existing connector instances will not be usable and attempting to use them for queries or updates will throw an error.

If your connector definitions do not include an entity filter, you can simply repair them.

If your connector definitions do include an entity filter, you need to rewrite the filter using the new filter options.

See the migration steps from GraphDB 9.x for Lucene, Solr, Elasticsearch, and Kafka.

Migrating plugins in a cluster¶

When upgrading to a newer GraphDB version, the new version might contain plugins that are not present in the older one. In this case, when using a cluster, the Plugin Manager disables the newly detected plugins, so you need to enable them by executing the following SPARQL update:

INSERT DATA {
    [] <http://www.ontotext.com/owlim/system#startplugin> "plugin-name"
}


Then configure your plugin following the steps described in the corresponding documentation, and make sure not to delete the database of the plugin you are using.

You can also stop a plugin before the migration if you deem it necessary:

INSERT DATA {
    [] <http://www.ontotext.com/owlim/system#stopplugin> "plugin-name"
}


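The updates above can also be executed outside the Workbench, for example with curl against the repository's RDF4J-style SPARQL endpoint. The host, port, and repository ID below are assumptions; substitute your own values. No expected output is shown because the call requires a running GraphDB instance.

```shell
# Hypothetical example: enable a newly detected plugin on repository
# "myrepo" of a node running on localhost:7200. The endpoint accepts a
# SPARQL update sent with the application/sparql-update content type.
curl -X POST "http://localhost:7200/repositories/myrepo/statements" \
     -H "Content-Type: application/sparql-update" \
     --data 'INSERT DATA { [] <http://www.ontotext.com/owlim/system#startplugin> "plugin-name" }'
```

In a secured cluster, the request additionally needs credentials (e.g., curl's -u option) for a user with write access to the repository.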
Migrating Helm charts¶

From version 9.8 onwards, GraphDB Enterprise Edition can be deployed with open-source Helm charts. See how to migrate them to GraphDB 10.0 here.