Cluster Basics

GraphDB EE is a high-performance, clustered semantic repository that scales in production environments, supporting simultaneous loading, querying, and inferencing of billions of RDF statements. It provides automatic failover, synchronization, and load balancing to maximize cluster utilization.

Every GraphDB cluster has at least one master node that manages one or more worker nodes, each hosting a full copy of the database. The master node provides automatic failover and load balancing between worker nodes. Multiple masters keep the cluster operational even in the event of a master node failure. The cluster also supports dynamic reconfiguration, so its topology can be changed at runtime.


The GraphDB EE cluster is made up of two basic node types: masters and workers.


Master node

The master node manages and distributes atomic requests (query evaluations and update transactions) to a set of workers. It does not store any RDF data itself and therefore requires only limited resources.


The master repository fully implements the RDF4J API. From the client’s perspective, it is a standard database node, which ensures a transparent upgrade from a single database instance to a cluster.

The master node is responsible for:

  • coordinating all read and write operations;

  • ensuring that all worker nodes are synchronized;

  • propagating updates (insert and delete tasks) across all workers and checking updates for inconsistencies;

  • load balancing read requests (query execution tasks) between all available worker nodes;

  • providing a uniform entry point for client software, where the client interacts with the cluster as though it is just a normal RDF4J repository;

  • providing a JMX interface for monitoring and administering the cluster;

  • automatic cluster reconfiguration in the event of failure of one or more worker nodes;

  • user-directed dynamic configuration of the cluster to add/remove worker nodes.


A cluster can contain more than one master node. Thus, every master monitors the health of all workers and can distribute query execution tasks between them, effectively allowing a cluster to have multiple entry points for queries, and preventing any single point of failure.

Worker nodes

Worker nodes are like non-clustered databases, each serving the entire repository. They remain accessible for direct read operations and act like a standard repository. However, a worker node that is part of a cluster will reject any direct write transaction not sent by the master.

Worker nodes require extensive resources as they are responsible for:

  • Storing all the information;

  • Executing all read/write operations.

WRITE requests (loading data)

Every write request is tested on a worker node to verify its validity. If it is successfully processed, the master will apply it to all other available workers. The master keeps the sequence of all write operations in a transaction log to guarantee that the workers will remain in a consistent state.


After a write has been processed, if a worker does not end up in the same data state as the initial test worker, it automatically starts a replication from a repository copy that is in a correct state.
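Putting the two paragraphs above together, the write path can be sketched in Python as follows. This is an illustrative model, not GraphDB's actual API; the names (Worker, fingerprint, replicate_from) and the checksum comparison are hypothetical stand-ins for the real consistency check.

```python
# Minimal sketch of the master's write path: test the update on one
# worker, record it in the transaction log, fan it out, and replicate
# any worker whose resulting state diverges. All names are illustrative.

class Worker:
    def __init__(self):
        self.statements = set()

    def apply(self, update):
        self.statements |= update

    def fingerprint(self):
        # Stand-in for a real checksum of the repository state.
        return hash(frozenset(self.statements))

    def replicate_from(self, other):
        self.statements = set(other.statements)


def handle_write(update, workers, tx_log):
    probe, *rest = workers
    probe.apply(update)             # 1. validate the update on a test worker
    tx_log.append(update)           # 2. record it for later synchronization
    expected = probe.fingerprint()
    for w in rest:                  # 3. apply to every other worker
        w.apply(update)
        if w.fingerprint() != expected:
            w.replicate_from(probe) # 4. out of sync: restore a correct state
```

A worker that started out with divergent data ends up identical to the test worker after the write, which is the guarantee the transaction log and replication are there to provide.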


READ requests (query evaluations)

Every read request is dispatched to an available worker node. The master selects a worker based on its system load and the configured consistency mode. When the cluster is in strict consistency mode or the request includes the local consistency flag, the master will dispatch the query to a worker with the most recent time stamp.

All read requests go through a priority queue that is rearranged after every read operation. If an unexpected error occurs, such as a server time-out or a lost connection, the request is re-sent to another worker node. Requests that consistently fail result in an error message being returned to the client.
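The selection and retry behavior can be modeled roughly like this. ReadWorker, its load and timestamp fields, and dispatch_read are illustrative names; the real master uses its own internal scheduling, and this sketch only mirrors the rules described above.

```python
class ReadWorker:
    """Illustrative worker handle with a reported load, a data
    timestamp, and a flag to simulate a lost connection."""

    def __init__(self, name, load, timestamp, fail=False):
        self.name, self.load, self.timestamp, self.fail = name, load, timestamp, fail

    def evaluate(self, query):
        if self.fail:
            raise ConnectionError("lost connection")
        return f"{self.name} answered {query}"


def dispatch_read(query, workers, strict=False, max_attempts=3):
    # Least-loaded workers first, mimicking the priority queue.
    candidates = sorted(workers, key=lambda w: w.load)
    if strict:
        # Strict consistency: only workers with the most recent data qualify.
        newest = max(w.timestamp for w in workers)
        candidates = [w for w in candidates if w.timestamp == newest]
    last_error = None
    for w in candidates[:max_attempts]:
        try:
            return w.evaluate(query)
        except ConnectionError as err:   # time-out / lost connection
            last_error = err             # re-send to the next worker
    raise RuntimeError("query failed on all attempted workers") from last_error
```

In this model a flaky least-loaded worker is silently skipped in favor of the next candidate, and only exhausting all candidates surfaces an error to the client.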


Worker node synchronization and replication

When a master detects a new worker node, it will choose how to synchronize it automatically. If the transaction log contains the full transaction history, the master may apply all missing updates to the worker.


Masters keep only a limited transaction history, controlled by the LogMaxSize and LogMaxDepth parameters in the JMX console:

  • LogMaxSize (JMX) / tx-log-max-size: the maximum number of transactions stored in the log (default: 100,000);

  • LogMaxDepth (JMX) / tx-log-max-depth: the maximum age of transactions kept in the log (default: 7 days).
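The combined effect of the two limits can be illustrated with a minimal log that trims by both count and age. Aside from the two defaults quoted above, everything here is a hypothetical sketch, not GraphDB's internal data structure.

```python
from collections import deque
import time


class TransactionLog:
    """Illustrative bounded transaction log, trimmed both by a maximum
    entry count (cf. tx-log-max-size, default 100,000) and by a maximum
    age in seconds (cf. tx-log-max-depth, default 7 days)."""

    def __init__(self, max_size=100_000, max_age=7 * 24 * 3600):
        self.max_size = max_size
        self.max_age = max_age
        self.entries = deque()          # (timestamp, update) pairs, oldest first

    def append(self, update, now=None):
        now = time.time() if now is None else now
        self.entries.append((now, update))
        while len(self.entries) > self.max_size:
            self.entries.popleft()      # drop the oldest beyond the size cap
        while self.entries and now - self.entries[0][0] > self.max_age:
            self.entries.popleft()      # drop entries older than the depth cap
```

Whichever limit is reached first wins: an old entry is discarded even if the log is far below its size cap, and vice versa.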


Otherwise, if the master no longer keeps the missing transactions, its only option is to replicate the worker from another instance or from a cluster backup.


In the event of an out-of-sync worker that has reported an unknown state, the master will instruct the failed worker to replicate directly from a good worker.
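The synchronization decision described in this section boils down to: replay the missing transactions when the log still holds them, otherwise fall back to full replication. A rough model, with hypothetical names (plan_synchronization, integer transaction ids):

```python
def plan_synchronization(worker_last_tx, log_entries):
    """Decide how to bring a (re)joining worker up to date.

    worker_last_tx: id of the last transaction the worker has applied
                    (None for a worker with unknown/empty state).
    log_entries:    ordered list of (tx_id, update) still held by the master.
    Returns ("replay", missing_updates) or ("replicate", None).
    """
    ids = [tx_id for tx_id, _ in log_entries]
    if worker_last_tx is None:
        # Unknown state: the trimmed log cannot prove completeness,
        # so full replication is the only safe option.
        return ("replicate", None)
    if worker_last_tx in ids:
        i = ids.index(worker_last_tx)
        return ("replay", [u for _, u in log_entries[i + 1:]])
    # The worker's position predates the trimmed log: full replication.
    return ("replicate", None)
```

A worker that is only slightly behind gets the cheap replay path; one that fell behind the trimmed log, or reports an unknown state, is replicated wholesale from a healthy copy.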



Having a cluster with a master and at least three workers ensures that the cluster will remain fully functional in the event of any node failure.

During replication, the master remains fully functional.


Master peering

To operate multiple master nodes, each of them needs to maintain knowledge of the current cluster state. Peering allows the master nodes to synchronize their transaction logs according to their master mode:

  • Read/write mode: the default mode where the master can process new write transactions and will broadcast them to all its peers before applying the write operation to all linked worker nodes;

  • Read-only mode: the master can receive write transactions only from other masters and will not apply the remote transactions to any worker nodes. All the other masters will wait for this master to acknowledge the transaction before sending it to their workers;

  • Muted mode: the master can receive transactions only from other masters and will apply the remote transaction to all locally managed workers. All other masters will not wait for this master to acknowledge the transaction before sending it to their workers.
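The three modes can be summarized in a small table-driven sketch: whether a peer applies remote transactions to its local workers, and whether the broadcasting master waits for that peer's acknowledgment. The mode names follow the list above; treating a read/write peer as one that is acknowledged is an assumption here, and the rest (Peer, broadcast) is illustrative, not GraphDB's actual protocol.

```python
MODES = {
    # mode          applies to its workers?   peers wait for its ack?
    "read-write": {"applies": True,  "await_ack": True},   # ack: assumption
    "read-only":  {"applies": False, "await_ack": True},
    "muted":      {"applies": True,  "await_ack": False},
}


class Peer:
    def __init__(self, name, mode):
        self.name, self.mode = name, mode
        self.worker_updates = []        # updates forwarded to local workers

    def receive(self, update):
        if MODES[self.mode]["applies"]:
            self.worker_updates.append(update)


def broadcast(update, peers):
    """Send an update to all peers, collecting the names of those whose
    mode requires acknowledgment before workers may proceed."""
    acked = []
    for peer in peers:
        peer.receive(update)
        if MODES[peer.mode]["await_ack"]:
            acked.append(peer.name)     # block until this peer confirms
    return acked
```

In this model a read-only peer delays the broadcast but touches no workers, while a muted peer updates its workers without ever being waited on.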



Having a cluster with two peered masters ensures high availability without any single point of failure.

Cluster backup

The master selects a worker with a full transaction log, switches it to read-only mode, and instructs it to create a backup. After the backup finishes, the read-only worker returns to read-write mode and synchronizes with the other workers.
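The procedure amounts to a mode toggle around the snapshot. BackupWorker, log_complete, and the snapshot callable are illustrative names for this sketch only:

```python
class BackupWorker:
    """Illustrative worker handle for the backup procedure."""

    def __init__(self, name, log_complete, data):
        self.name, self.log_complete, self.data = name, log_complete, data
        self.mode = "read-write"


def backup_cluster(workers, snapshot):
    # Pick a worker whose transaction log is complete.
    candidate = next(w for w in workers if w.log_complete)
    candidate.mode = "read-only"        # freeze it for a consistent snapshot
    try:
        return snapshot(candidate)      # other workers keep serving requests
    finally:
        candidate.mode = "read-write"   # rejoin and resynchronize afterwards
```

The try/finally mirrors the documented behavior: even if the snapshot fails, the worker is returned to read-write mode so it can catch up with its peers.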



During the backup operation, the cluster remains in read-write mode and can process updates as long as other workers are available to handle the requests.