Workbench user guide

The Workbench is the default web-based administration interface to GraphDB. It lets you administer GraphDB, as well as load, explore, manage, query and export data.

To access the Workbench, open http://localhost:7200/ in your browser. A summary page is displayed showing the versions of the various GraphDB components, license details, as well as links to the Documentation, Developer Hub and Support page.


All GraphDB Workbench functionalities are organised in three main dropdown menus - Data, SPARQL and Admin, which are also the main chapters in this user guide.

Admin (Administering the Workbench)

Managing locations

Locations represent individual GraphDB servers, where the repository data is stored. They can be local (a directory on the disk) or remote (an endpoint URL). Only a single location can be active at a time. Each location has a SYSTEM repository containing metadata about how to initialise other repositories from the current location.

When started, GraphDB creates the GraphDB-HOME/data directory as its default location. You can also attach other locations, or edit and detach previously attached ones.

To attach a data location:

  1. Go to Admin -> Locations and Repositories.
  2. Click Attach location.
  3. Enter a location:
    • For local locations, use the absolute path to a directory on the machine running the Workbench;
    • For remote locations, use the URL of the remote GraphDB web application. Additionally:
      • (Optionally) Specify credentials for the Sesame location (user and password);
      • (Optionally) Add the JMX connection parameters (host, port and credentials) - this allows you to monitor the resources on the remote location, do query monitoring and manage a GraphDB cluster.


The JMX endpoint is configured by specifying a host and a port. The Workbench constructs a JMX URI of the form service:jmx:rmi:///jndi/rmi://<host>:<port>/jmxrmi, and the remote process has to be started with compatible JMX settings, for example: -Dcom.sun.management.jmxremote.port=<port> -Djava.rmi.server.hostname=<host>

You can attach multiple locations but only one can be active at a given time. The active location is always shown in the navigation bar next to a plug icon.


If you use the Workbench as a SPARQL endpoint, all your queries are sent to a repository in the currently active location. This works well if you do not change the active location. To have endpoints that are always accessible outside the Workbench, we recommend using standalone Workbench and Engine installations, connecting the Workbench to the Engine over a remote location and using the Engine endpoints (i.e., not the ones provided by the Workbench) in any software that executes SPARQL queries.


Managing repositories

To manage repositories, go to Admin -> Locations and Repositories. This opens a list of available repositories and their locations as well as the user’s permissions for each repository.

Creating a repository

To create a new repository, click Create repository. This displays the configuration page for the new repository where a new, unique ID has to be entered. The rest of the parameters are described in the Configuration parameters section of the GraphDB documentation.


Alternatively, you can use a .ttl file that specifies the repository type, ID and configuration parameters. Click the triangle at the edge of the Create repository button and choose File.

Editing a repository

To edit the parameters you specify at repository creation time, click the edit icon next to a repository. Note that you have to restart the relevant GraphDB instance for the changes to take effect.

Deleting a repository

To delete a repository, click the bucket icon. Once a repository is deleted, all data contained in it is irrevocably lost.

Selecting a repository

To connect to a repository, go to Locations and Repositories and click the slider button next to it.


Another way to connect to a repository is by using the dropdown menu in the top right corner. This allows you to easily change the repository while running queries as well as importing and exporting data in other views.


To select your default repository, click on the pin.


Managing users and access

To manage users and access, go to Admin -> Users and Access.


User and access control is disabled by default. To enable it, click the Security slider above the user table. When security is disabled, everyone has full access to the repositories and the admin functionality.

The page displays a list of users and the number of repositories they have access to. From here, you can create new users, delete existing users or edit user properties, including setting their role and the read/write permission for each repository. The password can also be changed here.


User roles

  • User - a user who can read and write according to their permissions for each repository;
  • Admin - a user with full access, including creating, editing and deleting users.

Repository permissions can be bound either to a specific location or to all locations (“*” in the location list).


Login and default credentials

If security is enabled, the first page you see is the login page.


The default administrator account information is:
username: admin
password: root

It is highly recommended to change the default password as soon as you log in for the first time. Click your username (admin) in the top right corner to change it.

Free access

To allow people to access a predefined set of functionalities without having to log in, go to Admin -> Users and Access and click the Free Access slider above the user table. A dialog box opens and prompts you to select the access rights for free access users. The available permissions are similar to those for authenticated users, e.g., you can provide read or read/write access to one or more repositories.


Free access is especially useful for providing read-only access to a repository.


To use free access, you must have security enabled. The settings do not show if security is disabled.


Query monitoring and interruption

To track and interrupt long running queries, go to Admin -> Query monitoring.


If you are connected to a remote location, you need to have JMX configured properly. See how in Managing locations.


To interrupt long running queries, click the Abort query button.


The Query Monitoring view is based on the JMX console. See the GraphDB documentation for a description of the different attributes of the tracked query.

Resource monitoring

Monitoring the GraphDB internal state and behaviour is very important for identifying issues that need the administrator’s attention.

System information

Application info


JVM Arguments

The JVM arguments that can impact the server performance.


Configuration properties

In addition to the standard GraphDB command line parameters, the GraphDB Workbench can be controlled with the following parameters (they should be of the form -Dparam=value):



app.cors.enable (deprecated)

Enables cross-origin resource sharing.

Default: false


app.maxConnections (deprecated)

Sets the maximum number of concurrent connections to a GraphDB instance.

Default: 200


app.datadir (deprecated)

Sets the directory where the workbench persistence data will be stored.

Default: ${user.home}/.graphdb-workbench/


impex.dir (deprecated)

Changes the location of the file import folder.

Default: ${user.home}/graphdb-import/


app.maxUploadSize (deprecated)

Sets the maximum upload size for importing local files. The value must be in bytes.

Default: 200 MB


Sets the default language in which to filter results displayed in the resource exploration.

Default: en (English)


Workbench REST API

The Workbench REST API can be used to automate various tasks without having to resort to opening the Workbench in a browser and doing them manually.

The REST API calls fall into the following major categories:

Security management

Use the security management API to add, edit or remove users, thus integrating the Workbench security into an existing system.

Location management

Use the location management API to attach, activate, edit, or detach locations.

Repository management

Use the repository management API to add, edit or remove a repository in any attached location. Unlike the Sesame API, you can work with multiple remote locations from a single access point. When combined with the location management API, it can be used to automate the creation of multiple repositories across your network.

Data import

Use the data import API to import data into GraphDB. You can choose between server files and a remote URL.

Saved queries

Use the saved queries API to create, edit or remove saved queries. It is a convenient way to automate the creation of saved queries that are important to your project.

You can find more information about each REST API in Admin -> REST API Documentation, as well as execute them directly from there and see the results.



Known issue: A bug in the swagger angular JavaScript library leads to the following problem: when executing POST queries and the parameter value is JSON, the latter is not sent to the server. In these cases, use curl instead of the swagger UI.

Data (Working with data)

Importing data

To import data in the currently selected repository, go to Data -> Import.

There are several ways of importing data:

  • from local files;
  • from files on the server where the workbench is located;
  • from a remote URL (with a format extension or by specifying the data format);
  • by pasting the RDF data in the Text area tab;
  • from a SPARQL construct query directly.
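The last method simply loads the results of a CONSTRUCT query into the repository. A minimal sketch (the foaf prefix is standard; the data it matches is whatever your endpoint already holds):

```sparql
PREFIX foaf: <http://xmlns.com/foaf/0.1/>

# Build a new graph containing only the name triples
CONSTRUCT { ?person foaf:name ?name }
WHERE     { ?person foaf:name ?name }
```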

All import methods support asynchronous running of the import tasks, except for the text area import, which is intended for a very fast and simple import.


Currently, only one import task of a type is executed at a time, while the others wait in the queue as pending.


For local repositories, the parsing is done by the Workbench, so interruption and additional settings are supported.
When the location is a remote one, the data is simply sent to the remote endpoint, and the parsing and loading are performed there.

A file name filter is available to narrow down the list if you have many files.

Import settings

The settings for each import are saved so that you can use them, in case you want to re-import a file. They are:

  • Base URI - specifies the base URI against which to resolve any relative URIs found in the uploaded data (see the Sesame System documentation);
  • Context - if specified, imports the data into the specific context;
  • Chunk size - the number of statements to commit in one chunk. If a chunk fails, the import operation is interrupted and the already imported statements are not rolled back. The default is no chunking, in which case all statements are loaded in one transaction.
  • Retry times - how many times to retry the commit if it fails.
  • Preserve BNode IDs - when enabled, the blank node IDs found in the file are used; otherwise, GraphDB assigns its own internal blank node identifiers.

Importing local files


The limitation of this method is that it only supports files up to a limited size. The default is 200 MB, controlled by the graphdb.workbench.maxUploadSize property. The value is in bytes (e.g., -Dgraphdb.workbench.maxUploadSize=20971520 for 20 MB).

Loading data from local files directly streams the file to Sesame’s statements endpoint:

  1. Click the icon to browse files for uploading;
  2. When the files appear in the table, either import a file by clicking Import on its line or select multiple files and click Batch import;
  3. The import settings dialog appears, in case you want to change any of the settings.

Importing server files

The server files import allows you to load files of arbitrary sizes. Its limitation is that the files must be put (symbolic links are supported) in a specific directory. By default, it is ${user.home}/graphdb-import/.

If you want to tweak the directory location, see the graphdb.workbench.importDirectory system property. The directory is scanned recursively and all files with a semantic MIME type are visible in the Server files tab.

Importing remote content

You can import from a URL that serves RDF data. Any endpoint that returns RDF data can be used.


If the URL has a file extension, it is used to detect the correct data format. Otherwise, you have to provide the Data Format parameter, which is sent as an Accept header to the endpoint and then to the import loader.

Paste and import

You can import data by pasting it directly in the Text area tab. This very simple text import sends the data to the Repository Statements Endpoint.


SPARQL editor

You can also insert triples into a graph with an INSERT query in the SPARQL editor.
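For example, a minimal INSERT DATA update that adds one triple to a named graph (all URIs below are hypothetical):

```sparql
PREFIX ex: <http://example.org/>

# Add a single statement to the graph ex:myGraph
INSERT DATA {
  GRAPH ex:myGraph {
    ex:subject ex:predicate "object value" .
  }
}
```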


Exporting data

Data can be exported in several ways and formats.

Exporting an entire repository or individual graphs

Go to Data -> Export and decide whether you want to export the whole repository (in several different formats) or specific named graphs (in the same variety of formats). Click the appropriate format and the download starts:
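If you prefer SPARQL, the contents of a single named graph can also be retrieved with a CONSTRUCT query and downloaded from the SPARQL view (the graph URI below is hypothetical):

```sparql
# Return every triple stored in the given named graph
CONSTRUCT { ?s ?p ?o }
WHERE {
  GRAPH <http://example.org/myGraph> { ?s ?p ?o }
}
```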


Exporting query results

The SPARQL query results can also be exported from the SPARQL view by clicking Download As.

Exporting resources

From the resource description page, export the RDF triples that make up the resource description to JSON, JSON-LD, RDF-XML, N3/Turtle and N-Triples:
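The same information can be obtained in the SPARQL view with a DESCRIBE query (the resource URI below is hypothetical):

```sparql
# Return the RDF triples describing a single resource
DESCRIBE <http://example.org/resource/Berlin>
```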


Managing namespaces

To view and manipulate the RDF namespaces for the active repository, go to Data -> Namespaces. If you only have read access to the repository, you cannot add or delete namespaces but only view them.


Context view

For a list of the contexts (graphs) in a repository, go to Data -> Contexts. On this page, you can:

  • see a reference of available contexts in a repository (use the filter to narrow down the list if you have many contexts);
  • inspect triples in a context by clicking it;
  • drop a context by clicking the bucket icon.
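The same list of contexts can be produced with a standard SPARQL query, which is not specific to GraphDB:

```sparql
# List every named graph (context) that contains at least one triple
SELECT DISTINCT ?g
WHERE { GRAPH ?g { ?s ?p ?o } }
```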

Connector management

To access the Connector manager, go to Data -> Connector management. On this page, you can create, view and delete GraphDB Connector instances with a handy form-based editor for Connector configurations.

Creating connectors

To create a new Connector configuration, click the New Connector button in the tab of the respective Connector type you want to create. Once you fill in the configuration form, you can either execute the CREATE statement from the form by clicking OK or only view it by clicking View SPARQL Query. If you view the query, you can also copy it to execute manually or integrate in automation scripts.

Viewing connectors

Existing Connector instances show under Existing connectors (below the New Connector button). Click the name of an instance to view its configuration and SPARQL query, or click the repair / delete icons to perform these operations.


Viewing and editing resources

Viewing and adding

To view a resource in the repository, go to Data -> View resource and enter the URI of a resource or navigate to it by clicking the SPARQL results links.


Viewing resources provides an easy way to see triples where a given URI is the subject, predicate or object.


Even when the resource is not in the database, you can still add it from the resource view.


Here, you can create as many triples as you need for it, using the resource editor. To add a triple, fill in the necessary fields and click the tick next to the last one.


To view the new statements in TriG, click the View TriG button.


When ready, save the new resource to the repository.


Once you open a resource in View resource, you can also edit it. Click the edit icon next to the resource namespace and add, change or delete the properties of this resource.



You cannot change or delete the inferred statements.

Autocomplete index

The Autocomplete Index offers suggestions for URI local names in the SPARQL editor and the View resource page.

It is disabled by default. Go to Data -> Autocomplete Index to enable it. GraphDB indexes all URIs in the repository by splitting their local names into words, for example, subPropertyOf is split into sub+Property+Of. This way, when you search for a word, the autocomplete finds URIs whose local names contain the text you typed in the editor.



If you get strange results and think the index is broken, use the Build Now button to rebuild it.

If you try to use autocompletion before it is enabled, a tooltip warns you that the Autocomplete index is off and provides a link for building the index.


Autocomplete in the SPARQL editor

To start autocompletion in the SPARQL editor, use the shortcuts Alt+Enter / Ctrl+Space / Cmd+Space depending on your OS and the way you have set up your shortcuts. You can use autocompletion to:

  • search in all URIs

  • search only for URIs that start with a certain prefix

  • search for more than one word


    Just start writing the words one after another without spaces, and the index smartly splits them.

  • search for numbers


Autocomplete in the View resource

To use the autocompletion feature to find a resource, go to Data -> View resource and start typing.


Class hierarchy

To explore your data, navigate to Data -> Class hierarchy. You can see a diagram depicting the hierarchy of the imported RDF classes by the number of instances. The biggest circles are the parent classes and the nested ones are their children.


If your data has no ontology (hierarchy), the RDF classes are visualised as separate circles instead of nested ones.


Explore your data - different actions

  • To see what classes each parent has, hover over the nested circles.

  • To explore a given class, click its circle. The selected class is highlighted with a dashed line and a side panel with its instances opens for further exploration. For each RDF class you can see its local name, URI and a list of its first 1000 class instances. The class instances are represented by their URIs, which when clicked lead to another view, where you can further explore their metadata.


    The side panel includes the following:

    • Local name;
    • URI (Press Ctrl+C / Cmd+C to copy to clipboard and Enter to close);
    • Domain-Range Graph button;
    • Class instances count;
    • Scrollable list of the first 1000 class instances;
    • View Instances in SPARQL View button. It redirects to the SPARQL view and executes an auto-generated query that lists all class instances without LIMIT.
  • To go to the Domain-Range Graph diagram, double click a class circle or the Domain-Range Graph button from the side panel.

  • To explore an instance, click its URI from the side panel.

  • To adjust the number of classes displayed, drag the slider on the left-hand side of the screen. Classes are sorted by instance count, and the diagram displays only as many classes as the current slider value.

  • To administer your data view, use the toolbar options on the right-hand side of the screen.

    • To see only the class labels, click Hide/Show Prefixes. You can still view the prefixes by hovering over the class that interests you.
    • To zoom out of a particular class, click the Focus diagram home icon.
    • To reload the data on the diagram, click the Reload diagram icon. This is recommended when you have updated the data in your repository or you experience some strange behaviour, for example you cannot see a given class.
    • To export the diagram as an .svg image, click the Export Diagram download icon.

Domain-range graph

To see all properties of a given class as well as their domain and range, double click its class circle or the Domain-Range Graph button from the side panel. The RDF Domain-Range Graph view opens, enabling you to further explore the class connectedness by clicking the green nodes (object property class).

  • To administer your graph view, use the toolbar options on the right-hand side of the screen.

    • To go back to your class in the RDF Class hierarchy, click the Back to Class hierarchy diagram button.
    • To export the diagram as an .svg image, click the Export Diagram download icon.

Class Relationships

To explore the relationships between the classes, navigate to Data -> Class relationships. You can see a complex diagram, which by default shows only the top relationships. Each of them is a bundle of links between the individual instances of two classes. Each link is an RDF statement where the subject is an instance of one class, the object is an instance of another class, and the link is the predicate. Depending on the number of links between the instances of two classes, the bundle is thicker or thinner and takes the colour of the class with more incoming links. The links can go in both directions. Note that, contrary to the Class hierarchy, the Class relationships diagram is based on the actual statements between classes, not on the ontology schema.

In the example below, you can see that Person is the class with the largest number of links. It is very strongly connected to Feature and City, and most of the links originate from Person. You can also notice that all classes have many outgoing links to opengis:_Feature.


To the left of the diagram is a list of all classes, ordered by the number of links they have, with an indicator of the direction of the links. Click a class to see the classes it is linked to, again ordered by the number of links, with the actual numbers and the direction of the links displayed.


Use the list of classes to control which classes appear in the diagram with the add/remove icons next to each class. Remove all classes with the eraser icon. A green background indicates that the class is present in the diagram. You can see that Person has many more connections to City than to Village.


For any two classes in the diagram, you can find the top predicates that connect them, again ordered, with the number of statements using each predicate between instances of these classes. Person is linked to City by the birthPlace and deathPlace predicates.
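These per-predicate statistics can be sketched with an aggregate query of the following shape (the class URIs use the DBpedia ontology namespace purely as a hypothetical example):

```sparql
PREFIX dbo: <http://dbpedia.org/ontology/>

# Count, per predicate, the statements linking Person instances to City instances
SELECT ?p (COUNT(*) AS ?links)
WHERE {
  ?s a dbo:Person .
  ?o a dbo:City .
  ?s ?p ?o .
}
GROUP BY ?p
ORDER BY DESC(?links)
```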


All these statistics are built over the whole repository, so when you have a lot of data, building the diagram may be very slow. Please be patient in that case.

SPARQL (Querying data)

To manage and query your data, click the SPARQL menu. The SPARQL view integrates the YASGUI query editor plus some additional features, which are described below.


SPARQL is a SQL-like query language for RDF graph databases. It supports the following query types:

  • SELECT - returns tabular results;
  • CONSTRUCT - creates a new RDF graph based on query results;
  • ASK - returns “YES” if the query has a solution, otherwise “NO”;
  • DESCRIBE - returns RDF data about a resource; useful when you do not know the RDF data structure in the data source;
  • INSERT - inserts triples into a graph;
  • DELETE - deletes triples from a graph.
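For example, a simple SELECT query that returns ten arbitrary statements from the repository:

```sparql
# Fetch any ten triples
SELECT ?s ?p ?o
WHERE { ?s ?p ?o }
LIMIT 10
```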

The SPARQL editor offers two viewing/editing modes - horizontal and vertical.


Use the vertical mode switch to show the editor and the results next to each other, which is particularly useful on a wide screen. Click the switch again to return to horizontal mode.


Both in horizontal and vertical mode, you can also hide the editor or the results to focus on query editing or result viewing. Click the buttons Editor only, Editor and results or Results only to switch between the different modes.

  1. Manage your data by writing queries in the text area. It offers syntax highlighting and namespace autocompletion for easy reading and writing.


    To add/remove namespaces, go to Data -> Namespaces.

  2. Include or exclude inferred statements in the results by clicking the >>-like icon. When inferred statements are included, both elements of the arrow icon are the same colour (ON), otherwise the left element is dark and the right one is greyed out (OFF).

  3. Execute the query by clicking the Run button or use Ctrl/Cmd + Enter.


    You can find other useful shortcuts in the keyboard shortcuts link in the lower right corner of the SPARQL editor.

  4. The results can be viewed in different formats according to the type of the query. By default, they are displayed as a table. Other options are Raw response, Pivot table and Google Charts. You can order the results by column values and filter them by table values. The total number of results and the query execution time are displayed in the query results header.


    The total number of results is obtained by an asynchronous request with a default-graph-uri parameter.

  5. Navigate through all results by using pagination (SPARQL view can only show a limited number of results at a time). Each page executes the query again with query limit and offset for SELECT queries. For graph queries (CONSTRUCT and DESCRIBE), all results are fetched by the server and only the page of interest is gathered from the results iterator and sent to the client.
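Conceptually, for a SELECT query the second page of 1000 results corresponds to re-running the query with a limit and an offset, along these lines:

```sparql
# Page 2 at a page size of 1000: skip the first 1000 solutions
SELECT ?s ?p ?o
WHERE { ?s ?p ?o }
LIMIT 1000
OFFSET 1000
```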

  6. The query results are limited to 1000, since your browser cannot handle an unlimited number of results. To obtain all results, use Download As and select the required format for the data (JSON, XML, CSV, TSV and Binary RDF for SELECT queries, and all RDF formats for graph query results).

  7. Use the editor’s tabs to keep several queries opened, while working with GraphDB. The queries are saved in your browser’s local storage, so you can return to them even after switching views.

  8. Save your query with the Create saved query icon.

  9. Access existing saved queries from the Show saved queries icon (saved queries are persisted on the server running the Workbench).

  10. Copy your query as a URL by clicking the Get URL to current query icon. For a longer query, first save it and then get a link to the saved query by opening the saved queries list and clicking the respective Get URL to query icon.