Workbench user guide

Checking your setup

After opening the Workbench URL, a summary page is displayed showing the versions of the various GraphDB components and license details. If you see this page, it means you have installed and configured the Workbench correctly.

Using the Workbench

Managing locations


Locations represent individual GraphDB servers, where the repository data is stored. They can be local (a directory on the disk) or remote (an endpoint URL). Only a single location can be active at a time. Each location has a SYSTEM repository containing metadata about how to initialise the other repositories in that location.

When started, GraphDB creates the GraphDB-HOME/data directory as the default location.

Locations can be attached, edited and detached. To attach a data location:

  1. Go to Admin -> Locations and Repositories.
  2. Click Attach location.
  3. Enter a location:
    • For local locations, use the absolute path to a directory on the machine running the Workbench;
    • For remote locations, use the URL of the GraphDB web application;
      • (Optionally) Specify credentials for the Sesame location (user and password);
      • (Optionally) Add the JMX connection parameters (host, port and credentials) - this allows you to monitor the resources of the remote location, monitor running queries and manage a GraphDB cluster.


The JMX endpoint is configured by specifying a host and a port. The Workbench constructs a JMX URI of the form service:jmx:rmi:///jndi/rmi://<host>:<port>/jmxrmi, and the remote process has to be started with compatible JMX settings, for example: -Dcom.sun.management.jmxremote.port=<port> -Djava.rmi.server.hostname=<host>
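As a sketch of how the Workbench assembles that URI (illustrative Python, not the actual Workbench code):

```python
def jmx_service_uri(host: str, port: int) -> str:
    # Build the JMX service URI used to reach a remote location's JMX endpoint
    return f"service:jmx:rmi:///jndi/rmi://{host}:{port}/jmxrmi"

print(jmx_service_uri("192.0.2.10", 8089))
# service:jmx:rmi:///jndi/rmi://192.0.2.10:8089/jmxrmi
```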

You can attach multiple locations but only one can be active at a given time. The active location is always shown in the navigation bar next to a plug icon.


If you use the Workbench as a SPARQL endpoint, all your queries will be sent to a repository in the currently active location. This works well if you make sure no one changes the active location. To have endpoints that are always accessible outside the Workbench, we recommend using standalone Workbench and Engine installations, connecting the Workbench to the Engine over a remote location and using the Engine endpoints (i.e., not the ones provided by the Workbench) in any software that executes SPARQL queries.


Managing repositories

To access the repository management page, go to Admin -> Locations and Repositories. This displays a list of available repositories and their locations as well as the user’s permissions for each repository.

Creating a repository

To create a new repository, click Create repository. This will display the configuration page for the new repository where a new, unique ID has to be entered. The rest of the parameters are described in the Configuration parameters section of the GraphDB documentation.


Alternatively, you can use a .ttl file that specifies the repository type, ID and configuration parameters. Click the triangle at the edge of the Create repository button and choose File.
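For illustration, a minimal configuration file might look like this (a sketch only - the exact repository type, parameters and configuration vocabulary depend on your GraphDB edition and version; the ID and label here are made up):

```turtle
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix rep: <http://www.openrdf.org/config/repository#> .
@prefix sr: <http://www.openrdf.org/config/repository/sail#> .
@prefix sail: <http://www.openrdf.org/config/sail#> .
@prefix owlim: <http://www.ontotext.com/trree/owlim#> .

[] a rep:Repository ;
   rep:repositoryID "my-repo" ;
   rdfs:label "An example repository" ;
   rep:repositoryImpl [
     rep:repositoryType "graphdb:FreeSailRepository" ;
     sr:sailImpl [
       sail:sailType "graphdb:FreeSail" ;
       owlim:ruleset "owl-horst-optimized"
     ]
   ] .
```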

Editing a repository

The parameters you specify at repository creation time, such as cache memory, can be changed at any point. Click the edit icon next to a repository to edit it. Note that you have to restart the relevant GraphDB instance for the changes to take effect.

Deleting a repository

Click the bucket icon to delete a repository. Once a repository is deleted, all data contained in it is irrevocably lost.

Selecting a repository

To connect to a repository, go to Locations and Repositories and click the slider button next to it.


Another way to connect to a repository is by using the dropdown menu in the top right corner. This will allow you to easily change the repository while running queries as well as importing and exporting data in other views.


Loading data into a repository

There are four ways of importing data into the currently selected repository. They can be accessed from the menu by clicking Data -> Import.

All import methods support asynchronous execution of the import tasks, except for the text area import, which is intended for very fast and simple imports.


Currently, only one import task of a type is executed at a time, while the others wait in the queue as pending.


For local locations, the parsing is done by the Workbench, so interruption and additional settings are supported.
When the location is a remote one, the data is simply sent to the remote endpoint, where the parsing and loading are performed.

A file name filter is available to narrow down the list if you have many files.

Import settings

The settings for each import are saved so that you can reuse them if you re-import the file. They are:

  • Base URI - specifies the base URI against which to resolve any relative URIs found in the uploaded data (see the Sesame System documentation);
  • Context - if specified, imports the data into the given context;
  • Chunk size - the number of statements to commit in one chunk. If a chunk fails, the import operation is interrupted and the already imported statements are not rolled back. The default is no chunking, in which case all statements are loaded in a single transaction;
  • Retry times - how many times to retry a commit if it fails;
  • Preserve BNode IDs - if enabled, the blank node IDs found in the file are used; otherwise, GraphDB assigns its own internal blank node identifiers.
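The interplay of Chunk size and Retry times can be sketched as follows (an illustrative model only, not the Workbench implementation; `commit` is a stand-in for whatever transaction mechanism loads a batch of statements):

```python
def import_in_chunks(statements, commit, chunk_size=None, retry_times=0):
    """Commit statements in chunks, retrying a failed chunk before giving up.

    Already-committed chunks are not rolled back, matching the behaviour
    described above. Returns the number of statements committed.
    """
    if not chunk_size:
        commit(statements)  # no chunking: everything in one transaction
        return len(statements)
    committed = 0
    for start in range(0, len(statements), chunk_size):
        chunk = statements[start:start + chunk_size]
        for attempt in range(retry_times + 1):
            try:
                commit(chunk)
                committed += len(chunk)
                break
            except Exception:
                if attempt == retry_times:
                    # import interrupted; earlier chunks stay committed
                    return committed
    return committed
```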

Four ways to import data

Upload files and import


This method is limited to files of a certain size. The default limit is 200 MB and is controlled by the graphdb.workbench.maxUploadSize property. The value is in bytes (e.g., -Dgraphdb.workbench.maxUploadSize=20971520 sets a 20 MB limit).

Loading data from the Local files tab streams the file directly to Sesame's statements endpoint:

  1. Click the icon to browse files for uploading;
  2. When the files appear in the table, either import a file by clicking Import on its line or select multiple files and click Batch import;
  3. The import settings dialog appears, in case you want to specify additional settings.

Import server files

The server files import allows you to load files of arbitrary sizes. Its limitation is that the files must be put (symbolic links are supported) in a specific directory. By default, it is ${user.home}/graphdb-import/.

If you want to tweak the directory location, see the graphdb.workbench.importDirectory system property. The directory will be scanned recursively and all files with a semantic MIME type will be visible in the Server files tab.

Import remote content

You can import from a URL with RDF data. Any endpoint that returns RDF data can be used.


If the URL ends with a file extension, it is used to detect the data format. Otherwise, you have to provide the Data Format parameter, which is sent as the Accept header to the endpoint and then to the import loader.

You can also insert triples into a graph with an INSERT query in the SPARQL editor.
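For example, a minimal update of this kind (the graph and resource URIs are purely illustrative):

```sparql
PREFIX ex: <http://example.org/>

INSERT DATA {
  GRAPH ex:myGraph {
    ex:subject ex:predicate "an example value" .
  }
}
```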


Paste and import

You can also import data by pasting it directly in the Text area tab. This very simple text import sends the data to the Repository Statements Endpoint.


Exploring your data and class relationships

In version 7.0, GraphDB introduces powerful new features for RDF data visualisation that allow you to thoroughly explore and analyse the imported data. They:


  • Help you write SPARQL queries easily;
  • Help you trace relationships and hierarchies between RDF classes;
  • Save time and effort.

Currently, the supported visualisations are RDF Class hierarchy, RDF Domain-Range Graph, and Class relationships.

Class hierarchy

To explore your data, navigate to Data -> Class hierarchy. You can see a diagram depicting the hierarchy of the imported RDF classes by number of instances. The biggest circles are the parent classes and the nested ones are their children.


If your data has no ontology (hierarchy), the RDF classes will be visualised as separate circles, instead of nested ones.


Explore your data - different actions

  • To see what classes each parent has, hover over the nested circles.

  • To explore a given class, click its circle. The selected class is highlighted with a dashed line and a side panel with its instances opens for further exploration. For each RDF class, you can see its local name, URI and a list of its first 1000 instances. The class instances are represented by their URIs, which, when clicked, lead to another view where you can further explore their metadata.


    The side panel includes the following:

    • Local name;
    • URI (Press Ctrl+C / Cmd+C to copy to clipboard and Enter to close);
    • Domain-Range Graph button;
    • Class instances count;
    • Scrollable list of the first 1000 class instances;
    • View Instances in SPARQL View button. It redirects to the SPARQL view and executes an auto-generated query that lists all class instances without LIMIT.
  • To go to the Domain-Range Graph diagram, double click a class circle or the Domain-Range Graph button from the side panel.

  • To explore an instance, click its URI from the side panel.

  • To adjust the number of classes displayed, drag the slider on the left-hand side of the screen. Classes are sorted by instance count and the diagram displays only as many classes as the current slider value.

  • To administer your data view, use the toolbar options on the right-hand side of the screen.

    • To see only the class labels, click Hide/Show Prefixes. You can still view the prefixes by hovering over the class that interests you.
    • To zoom out of a particular class, click the Focus diagram home icon.
    • To reload the data on the diagram, click the Reload diagram icon. This is recommended when you have updated the data in your repository or you experience some strange behaviour, for example you cannot see a given class.
    • To export the diagram as an .svg image, click the Export Diagram download icon.

RDF domain-range graph

To see all properties of a given class as well as their domain and range, double click its class circle or the Domain-Range Graph button from the side panel. The RDF Domain-Range Graph view opens, enabling you to further explore the class connectedness by clicking the green nodes (object property class).

  • To administer your graph view, use the toolbar options on the right-hand side of the screen.

    • To go back to your class in the RDF Class hierarchy, click the Back to Class hierarchy diagram button.
    • To export the diagram as an .svg image, click the Export Diagram download icon.

Class relationships

To explore the relationships between the classes, navigate to Data -> Class relationships. You can see a complicated diagram showing only the top relationships, where each of them is a bundle of links between the individual instances of two classes. Each link is an RDF statement where the subject is an instance of one class, the object is an instance of another class, and the link is the predicate. Depending on the number of links between the instances of two classes, the bundle can be thicker or thinner and takes the colour of the class with more incoming links. These links can be in both directions.

To see the exact number of links between the instances of two classes, mouse over a relationship.


In the example below, you can see that Person is the class with the biggest number of links. It is very strongly connected to Concept and most of the links are from Person to Concept. Also, you notice that all classes have many outgoing links to Concept.


When you hover over the bundle of links, you can observe the exact number of links between Person and Concept.


Autocomplete index

In version 7.0, GraphDB also introduces an Autocomplete Index, which offers suggestions for URI local names in the SPARQL editor and the View resource page.

Go to Data -> Autocomplete Index and enable Autocomplete. GraphDB indexes all URIs in the repository by splitting their local names into words; for example, subPropertyOf is split into sub + Property + Of. This way, when you search for a word, the autocompletion finds URIs whose local names contain the characters you typed in the editor. The Autocomplete index is disabled by default.
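The word splitting can be illustrated with a simple camel-case tokenizer (a sketch of the idea only, not GraphDB's actual indexing code):

```python
import re

def split_local_name(local_name: str) -> list:
    # Split a URI local name into words on camel-case boundaries and digit runs
    return re.findall(r"[A-Z]?[a-z]+|[A-Z]+(?![a-z])|\d+", local_name)

print(split_local_name("subPropertyOf"))  # ['sub', 'Property', 'Of']
```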



Use the Build Now button if you get strange results and you think the index was broken.

If you try to use autocompletion before it is enabled, a tooltip warns you that the Autocomplete index is off and provides a link for building the index.


Autocomplete in the SPARQL editor

To start autocompletion in the SPARQL editor, use the shortcuts Alt+Enter / Ctrl+Space / Cmd+Space depending on your OS and the way you have set up your shortcuts. You can use autocompletion to:

  • search in all URIs

  • search only for URIs that start with a certain prefix

  • search for more than one word


    Just start writing the words one after another without spaces, e.g., “pngOnto”, and the index will smartly split them.

  • search for numbers


Autocomplete in the View resource

To use the autocompletion feature to find a resource, go to Admin -> View resource and start typing.


Executing queries

Access the SPARQL pages from the menu by clicking SPARQL. The GraphDB Workbench SPARQL view integrates the YASGUI query editor and has some additional features.

SPARQL SELECT and UPDATE queries are executed from the same view. The Workbench detects the query type and sends it to the correct Sesame endpoint. Some handy features are:

  • A query area with syntax highlighting and namespace autocompletion - to add/remove namespaces go to Data -> Namespaces;
  • Query tabs - saved in your browser local storage, so you can keep them even when switching views.
Saved queries

Click the save icon to save a query, or the folder icon to access existing saved queries. Saved queries are persisted on the server running the Workbench.

Include or exclude inferred statements

A >>-like icon controls the inclusion of inferred statements. When both elements of the icon are the same shade of dark colour, inferred statements are included. When only the left element is dark and the right one is greyed out, only explicit statements are included.

The SPARQL view can show only a limited number of results at once; use pagination to navigate through all of them. For SELECT queries, each page executes the query again with the appropriate limit and offset. For graph queries (CONSTRUCT and DESCRIBE), all results are fetched by the server and only the page of interest is taken from the results iterator and sent to the client.
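The SELECT-query pagination described above can be sketched as follows (an illustrative helper, not the Workbench code):

```python
def page_query(query: str, page: int, page_size: int = 1000) -> str:
    # Rewrite a SELECT query to fetch one page of results (pages are 1-based)
    offset = (page - 1) * page_size
    return f"{query} LIMIT {page_size} OFFSET {offset}"

print(page_query("SELECT ?s WHERE { ?s ?p ?o }", 3))
# SELECT ?s WHERE { ?s ?p ?o } LIMIT 1000 OFFSET 2000
```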
Keyboard shortcuts

Use Ctrl/Cmd+Enter to execute a query. You can find other useful shortcuts in the keyboard shortcuts link in the lower right corner of the SPARQL editor.

Downloading query results

The Download As button allows you to download query results in your preferred format (JSON, XML, CSV, TSV and Binary RDF for SELECT queries and all RDF formats for graph query results).

Various ways to view the results

Query results are shown in a table on the same page. You can order the results by column values and filter by table values.

The results can be viewed in different formats according to the type of the query and they can be used to create a Google Charts diagram.


The query results are limited to 1000, since your browser cannot handle an infinite number of results. To obtain all results, use Download As and select the required format for the data.

You will see the total number of results and the query execution time in the query results header.


The total number of results is obtained by a separate asynchronous request that passes a special default-graph-uri parameter value recognised by GraphDB.

SPARQL editor options

Since GraphDB 6.5, the Workbench introduces support for additional viewing/editing modes in the SPARQL editor.

Horizontal and vertical mode

Use the vertical mode switch to show the editor and the results next to each other, which is particularly useful on wide screens. Click the switch again to return to horizontal mode.

Viewing results or editor only

Both in horizontal and vertical mode, you can also hide the editor or the results to focus on query editing or result viewing. Click the buttons Editor only, Editor and results or Results only to switch between the different modes.

Query Monitoring and Interruption

To track and interrupt long running queries, click Admin -> Query monitoring to go to the Query monitoring view.


If you are connected to a remote location, you need to have JMX configured properly; see Managing locations.


To interrupt long running queries, click the Abort query button.


The Query monitoring view is based on the JMX console and shows the different attributes of each tracked query.

Exporting data

Data can be exported in several ways and formats.

Exporting entire repository or individual graphs

Click Data -> Export from the menu and choose whether to export the whole repository or specific named graphs (both in the same variety of formats). Click the appropriate format and the download will start:


Exporting query results

The SPARQL query results can also be exported from the SPARQL view with results by clicking Download As.

Exporting resources

From the resource description page, export the RDF triples that make up the resource description to JSON, JSON-LD, RDF-XML, N3/Turtle and N-Triples:


Viewing and editing resources

Viewing and Adding

To view a resource in the repository, go to Data -> View resource and enter the URI of a resource or navigate to it by clicking the SPARQL results links.


Viewing resources provides an easy way to see triples where a given URI is the subject, predicate or object.


Even when the resource is not in the database, you can still add it from the resource view.


Here, you can create as many triples as you need for it, using the resource editor. To add a triple, fill in the necessary fields and click the tick next to the last one.


To view the new statements in TriG, click the View TriG button.


When ready, save the new resource to the repository.


Once you open a resource in View resource, you can also edit it. Click the edit icon next to the resource namespace and add, change or delete the properties of this resource.



You cannot change or delete inferred statements.

Namespace management

You can view and manipulate the RDF namespaces for the active repository from the view accessible through Data -> Namespaces. If you only have read access to the repository, you cannot add or delete namespaces but only view them.


Context view

A list of the contexts (graphs) in a repository can be seen in the Contexts view available through Data -> Contexts. You can use it for the following tasks:

  • to see a reference of available contexts in a repository (use the filter to narrow down the list if you have many contexts);
  • to inspect triples in a context by clicking it;
  • to drop a context by clicking the bucket icon.

Connector management

The Connector manager lets you create, view and delete GraphDB Connector instances. It provides a handy form-based editor for Connector configurations. Click Data -> Connector management to access it.

Creating connectors

To create a new Connector configuration, click the New Connector button in the tab of the respective Connector type you want to create. Once you fill the configuration form, you can either execute the CREATE statement from the form by clicking OK or only view it by clicking View SPARQL Query. If you view the query, you can also copy it to execute manually or integrate in automation scripts.
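For illustration, a connector CREATE statement generated by the form has roughly this shape (sketched here for a hypothetical Lucene connector instance; the exact prefixes, parameters and field definitions depend on the connector type and your GraphDB version, and the type and property URIs are made up):

```sparql
PREFIX : <http://www.ontotext.com/connectors/lucene#>
PREFIX inst: <http://www.ontotext.com/connectors/lucene/instance#>

INSERT DATA {
  inst:my_index :createConnector '''
{
  "types": ["http://example.org/MyType"],
  "fields": [
    {
      "fieldName": "label",
      "propertyChain": ["http://www.w3.org/2000/01/rdf-schema#label"]
    }
  ]
}
''' .
}
```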

Viewing connectors

Existing Connector instances will show under Existing connectors (below the New Connector button). Click the name of an instance to view its configuration and SPARQL query, or click the repair and delete icons to perform those operations.


Users and access management


User and access checks are disabled by default. If you want to enable them, go to Admin -> Users and Access and click the Security slider above the user table.

Users and access management is under Admin -> Users and Access from the menu. The page displays a list of users and the number of repositories they have access to. It is also possible to disable the security for the entire GraphDB Workbench instance by clicking Disable/Enable. When security is disabled, everyone has full access to the repositories and the admin functionality.

From here, you can create new users, delete existing users or edit user properties, including setting their role and the read/write permission for each repository. The password can also be changed here.


User roles:

  • User - a user who can read and write according to their permissions for each repository;
  • Admin - a user with full access, including creating, editing and deleting users.

Since GraphDB 6.4, repository permissions can be bound to a specific location only, or to all locations (“*” in the location list) to mimic the behaviour of pre-6.4 versions.


Login and default credentials:

If security is enabled, the first page you will see is the login page.


The default administrator account information is:
username: admin
password: root

It is highly recommended that you change the root password as soon as you log in for the first time. Click your username (admin) in the top right corner to change it.

Free access

Free access is a new feature since GraphDB 6.5. It allows people to access a predefined set of functionality without having to log in. This is especially useful for providing read-only access to a repository.

You can enable free access by going to Admin -> Users and Access and clicking the Free Access slider above the user table. When you enable free access, a dialog box will open and prompt you to select the access rights for free access users. The available permissions are similar to those for authenticated users, e.g., you can provide read or read/write access to one or more repositories.


To use free access, you must have security enabled. The setting will not show if security is disabled.



Workbench REST API

The Workbench in GraphDB 6.4 introduces the Workbench REST API. It can be used to automate various tasks without having to resort to opening the Workbench in a browser and doing them manually.

The REST API calls fall into five major categories:

Security management

Use the security management API to add, edit or remove users and thus integrate Workbench security into an existing system.

Location management

Use the location management API to attach, activate, edit or detach locations.

Repository management

Use the repository management API to add, edit or remove repositories in any attached location. Unlike the Sesame API, you can work with multiple remote locations from a single access point. When combined with the location management API, it can be used to automate the creation of multiple repositories across your network.

Data import

Use the data import API to import data into GraphDB. You can choose between server files and a remote URL.

Saved queries

Use the saved queries API to create, edit or remove saved queries. It is a convenient way to automate the creation of saved queries that are important to your project.

You can find more information about each REST API in Admin -> REST API Documentation, as well as execute them directly from there and see the results.



Known issue: A bug in the swagger angular JavaScript library leads to the following problem - when executing POST queries and the parameter value is JSON, the latter is not sent to the server. In these cases, use curl instead of the swagger UI.

Configuration properties

In addition to the standard GraphDB command line parameters, the GraphDB Workbench can be controlled with the following parameters. They should be of the form -Dparam=value.
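For example, a startup command raising the upload limit to 500 MB and moving the import folder might look like this (the startup command and paths are illustrative and depend on your installation):

```shell
graphdb -Dgraphdb.workbench.maxUploadSize=524288000 \
        -Dgraphdb.workbench.importDirectory=/data/graphdb-import
```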



app.cors.enable (deprecated)

Enables cross-origin resource sharing.

Default: false


app.maxConnections (deprecated)

Sets the maximum number of concurrent connections to a GraphDB instance.

Default: 200


app.datadir (deprecated)

Sets the directory where the workbench persistence data will be stored.

Default: ${user.home}/.graphdb-workbench/


impex.dir (deprecated)

Changes the location of the file import folder.

Default: ${user.home}/graphdb-import/


app.maxUploadSize (deprecated)

Sets the maximum upload size for importing local files. The value must be in bytes.

Default: 200 MB


Sets the default language in which to filter results displayed in the resource exploration.

Default: en (English)


Sets the limit for the number of statements displayed in the resource view page.

Default: 100