Quick Start Guide

Starting the database

  1. Download your GraphDB distribution file and unzip it.

  2. Start the GraphDB database and Workbench interfaces in the embedded Tomcat server by executing the startup script located in the root directory:

    startup.bat (Windows)
    ./startup.sh (Linux/Unix/Mac OS)

    The message below appears in your Terminal and the GraphDB Workbench opens up at http://localhost:8080/.

    INFO: Starting ProtocolHandler ["http-bio-8080"]
    Opening web app in default browser


To change the database port number, execute:

startup.bat  -p 9080 (Windows)
./startup.sh -p 9080 (Linux/Unix/Mac OS)


In versions of GraphDB prior to 6.6.3, there is a known issue with the startup.bat file: specifying the port number as shown above does not work. Instead, edit the file and add the port number to the java command line like this:

java -XX:PermSize=256m -XX:MaxPermSize=256m -jar graphdb-tomcat.jar -p 9080


The maximum amount of memory that the GraphDB JVM can allocate is controlled by the -Xmx parameter (set to -Xmx1G by default). To see how to increase the memory, see Configuring memory.
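As a sketch only (the exact java command line differs between GraphDB versions, and the values below are illustrative, not defaults taken from your installation), the heap limit is raised by editing the -Xmx value on the java invocation in the startup script:

```shell
# Illustrative only: raise the maximum heap from the default 1 GB to 4 GB
# by changing/adding -Xmx on the java command line in the startup script.
java -XX:PermSize=256m -XX:MaxPermSize=256m -Xmx4g -jar graphdb-tomcat.jar
```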


By default, GraphDB runs in a non-secure mode.
To see how to add password protection to the GraphDB server instance, see Access Rights and Security. The default administrator user name and password are admin/root.

Creating locations and repositories

Data locations group a set of repositories and expose them as Sesame endpoints. A location can be initialised as a local file path or as a remote server URL, which must point to a valid Sesame endpoint. When a local file path is set, the current Java process initialises all repositories locally, and they operate in the same memory address space. Each location has a SYSTEM repository containing metadata describing how to initialise the other repositories in that location.

To create data locations and repositories:

  1. Go to the GraphDB Workbench and navigate to Admin -> Locations and Repositories.

  2. Choose Attach Location and enter a local file system path.

  3. Click the Add button.


    This sets the path where all GraphDB database binary files are stored. Alternatively, connect to a remote location exposed via the Sesame API by supplying a valid URL endpoint.

  4. Create a repository with the Create Repository button.

  5. Enter the Repository ID (e.g., worker-node) and leave all other optional configuration settings with their default values.


    For repositories with more than a few tens of millions of statements, see Configuring a Repository.

  6. Set the newly created repository as the default repository for this location with the Connect button.



Alternatively, use the curl command to perform basic location and repository management through the Workbench REST API.
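For example (a sketch that assumes a Workbench running at http://localhost:8080; the /rest/... paths are assumptions, so check the REST API documentation for your version for the exact endpoints and payloads):

```shell
# List the attached locations (assumed endpoint; requires a running server)
curl http://localhost:8080/rest/locations

# List the repositories in the active location (assumed endpoint)
curl http://localhost:8080/rest/repositories
```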

Loading data

Supported file formats

Serialisation format MIME type File extension
RDF/XML application/rdf+xml .owl / .rdf
N-Triples text/plain .nt
Turtle text/turtle .ttl
N3 text/rdf+n3 .n3
N-Quads text/x-nquads .nq
RDF/JSON application/rdf+json .rj
TriX application/trix .trix
TriG application/x-trig .trig
Sesame Binary RDF application/x-binary-rdf .brf
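When posting files with curl (as in the sections below), the Content-Type header must match the serialisation. A small helper sketch that derives the MIME type from the file extension, following the mapping in the table above (mime_for is a hypothetical helper name, not part of GraphDB):

```shell
# Map a file extension to the matching RDF MIME type (from the table above).
mime_for() {
  case "$1" in
    *.owl|*.rdf) echo "application/rdf+xml" ;;
    *.nt)        echo "text/plain" ;;
    *.ttl)       echo "text/turtle" ;;
    *.n3)        echo "text/rdf+n3" ;;
    *.nq)        echo "text/x-nquads" ;;
    *.rj)        echo "application/rdf+json" ;;
    *.trix)      echo "application/trix" ;;
    *.trig)      echo "application/x-trig" ;;
    *.brf)       echo "application/x-binary-rdf" ;;
    *)           echo "application/octet-stream" ;;  # unknown extension
  esac
}

mime_for mydata.ttl   # prints text/turtle
```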

Loading data through the GraphDB Workbench

To load a local file:

  1. Select Data -> Import.
  2. Open the Local files tab and click the Select files icon to choose the file you want to upload.
  3. Click the Import button.
  4. Enter the import settings in the pop-up window.

Import Settings

  • Base URI: the default prefix for all local names in the file;
  • Context: specifies a named graph within the repository;
  • Chunk size: the size of each batch operation; used for very large files (e.g., 10,000 - 100,000 triples per chunk);
  • Retry times: the number of times the Workbench retries uploading a chunk after an HTTP error during the data transfer, before cancelling the import;
  • Preserve BNode IDs: when selected, the parser keeps the blank node IDs with their original strings.


Chunking a file is optional, but we recommend it for files larger than 200 MB.

  5. Click the Import button.

To load a database server file:

  1. Create a folder named graphdb-import in your user home directory.
  2. Copy all data files you want to load into the GraphDB database to this folder.
  3. Go to the GraphDB Workbench.
  4. Select Data -> Import.
  5. Open the Server files tab.
  6. Select the files you want to import.
  7. Click the Import button.


The file can be loaded only if it is accessible from the local file system (or a network file system mounted locally).



This option works only for local locations (i.e., the location is a local file path rather than a remote URL); otherwise, the file will be posted via the remote Sesame API.
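Steps 1-2 above can be sketched from the shell as follows (data.ttl is a placeholder file name; here a tiny Turtle file is created just for illustration):

```shell
# Step 1: create the server import folder in the user home directory
mkdir -p "$HOME/graphdb-import"

# Step 2: copy the data files to be loaded into it
# (data.ttl stands in for your real RDF file)
printf '<http://example.org/s> <http://example.org/p> <http://example.org/o> .\n' > data.ttl
cp data.ttl "$HOME/graphdb-import/"
```

The files then appear in the Server files tab of the Import page.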

Other ways of loading data:

  • By pasting a data URL in the Remote content tab of the Import page.

  • By pasting data in the Text area tab of the Import page.

  • By executing an INSERT query in the SPARQL -> SPARQL Query page.


Loading data through SPARQL or Sesame API

The GraphDB database also exposes a powerful API through standard SPARQL and Sesame endpoints, to which data can be posted with cURL, a local Java client API, or a Sesame console. The API is fully standards-compliant and allows every database operation to be executed via an HTTP client request.

  1. Locate the correct GraphDB URL endpoint:

    • select Admin -> Locations and Repositories

    • click the link icon next to the repository name

    • copy the repository URL.

  2. Go to the folder where your local data files are.

  3. Execute the command:

    curl -X POST -H "Content-Type: application/x-turtle" -T localfilename.ttl http://localhost:8080/repositories/repository-id/statements

    where localfilename.ttl is the data file you want to import and http://localhost:8080/repositories/repository-id/statements is the GraphDB URL endpoint of your repository.


    Alternatively, use the full path to your local file.

Loading data through the GraphDB LoadRDF tool

LoadRDF is a low-level bulk load tool, which writes directly to the database index structures. It is extremely fast and supports parallel inference. For more information, see the LoadRDF Tool.


Loading data through the GraphDB LoadRDF tool can be performed only if the repository is empty, e.g., for an initial bulk load performed while the database is not running.

Querying data


SPARQL is an SQL-like query language for RDF graph databases, supporting the following query types:

  • SELECT - returns tabular results;
  • CONSTRUCT - creates a new RDF graph based on query results;
  • ASK - returns “YES” if the query has a solution, otherwise “NO”;
  • DESCRIBE - returns RDF data about a resource; useful when you do not know the RDF data structure in the data source;
  • INSERT - inserts triples into a graph;
  • DELETE - deletes triples from a graph.
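As a quick illustration, here are minimal examples of two of these query types, written to files so they can later be posted to the repository endpoint with curl (the file names and the example IRIs are placeholders):

```shell
# A minimal ASK query: does the repository contain any statement at all?
cat > ask.sparql <<'EOF'
ASK { ?s ?p ?o }
EOF

# A minimal INSERT DATA update: add one triple to the default graph.
cat > insert.sparql <<'EOF'
INSERT DATA {
  <http://example.org/s> <http://example.org/p> "an example value" .
}
EOF

cat ask.sparql
```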

Querying data through the GraphDB Workbench

  1. Select the repository and click the SPARQL menu tab.

  2. Write your query.

    ## Example query:
    CONSTRUCT {?s ?p ?o}
    WHERE {?s ?p ?o}
    LIMIT 100
  3. Click the Run button.


To get started with your own query, use the sample template provided above the query text box. The editor’s syntax highlighting will guide you in writing your own data READ or WRITE queries. You can also save your favourite query templates for later use, or share them via a persistent link using the link icon.

Querying data programmatically

SPARQL is not only a standard query language, but also a protocol for communicating with RDF databases. GraphDB stays compliant with the protocol specification and allows querying data with standard HTTP requests.

Execute the example query with an HTTP GET request:

curl -G -H "Accept: application/x-trig" \
  -d query=CONSTRUCT+%7B%3Fs+%3Fp+%3Fo%7D+WHERE+%7B%3Fs+%3Fp+%3Fo%7D+LIMIT+10 \
  http://localhost:8080/repositories/repository-id

Execute the example query with a POST operation:

curl -X POST --data-binary @file.sparql -H "Accept: application/rdf+xml" \
  -H "Content-Type: application/x-www-form-urlencoded" \
  http://localhost:8080/repositories/repository-id

where file.sparql contains the URL-encoded query:

query=CONSTRUCT+%7B%3Fs+%3Fp+%3Fo%7D+WHERE+%7B%3Fs+%3Fp+%3Fo%7D+LIMIT+10

For more information on how to interact with the GraphDB APIs, refer to the Sesame and SPARQL protocols or the Linked Data Platform specifications.
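Incidentally, the encoded query string used in the GET example above does not have to be typed by hand; a sketch that generates it from the shell, assuming python3 is available (urllib.parse.quote_plus applies the same percent-encoding):

```shell
# URL-encode a SPARQL query for use in the query= parameter
query='CONSTRUCT {?s ?p ?o} WHERE {?s ?p ?o} LIMIT 10'
encoded=$(python3 -c 'import sys, urllib.parse; print(urllib.parse.quote_plus(sys.argv[1]))' "$query")
echo "$encoded"   # CONSTRUCT+%7B%3Fs+%3Fp+%3Fo%7D+WHERE+%7B%3Fs+%3Fp+%3Fo%7D+LIMIT+10
```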

Supported export/download formats

Serialisation format Query type MIME type
XML SELECT, ASK application/sparql-results+xml
JSON SELECT, ASK application/sparql-results+json
CSV SELECT, ASK text/csv
TSV SELECT, ASK text/tab-separated-values
RDF/JSON CONSTRUCT, DESCRIBE application/rdf+json
JSON-LD CONSTRUCT, DESCRIBE application/ld+json
RDF/XML CONSTRUCT, DESCRIBE application/rdf+xml
N-Quads CONSTRUCT, DESCRIBE text/x-nquads
N-Triples CONSTRUCT, DESCRIBE text/plain
Turtle CONSTRUCT, DESCRIBE text/turtle
TriX CONSTRUCT, DESCRIBE application/trix
TriG CONSTRUCT, DESCRIBE application/x-trig

Additional resources

Community Forum and Evaluation Support: http://stackoverflow.com