Loading data using the Workbench¶
There are several ways of importing data:
- from local files;
- from files on the server where the workbench is located;
- from a remote URL (with a format extension or by specifying the data format);
- by pasting the RDF data in the Text area tab;
- from a SPARQL construct query directly.
All import methods support asynchronous execution of the import tasks, except for the text area import, which is intended for very fast and simple imports.
Note
Currently, only one import task of a given type is executed at a time; the others wait in the queue as pending.
Note
A file name filter is available to narrow down the list if you have many files.
Import settings¶
The settings for each import are saved so that they can be reused if you re-import a file. They are:
Base IRI - specifies the base IRI against which to resolve any relative IRIs found in the uploaded data. When the data does not contain relative IRIs, this field may be left empty.
Target graphs - when specified, imports the data into one or more graphs. Some RDF formats may specify graphs, while others do not support that. The latter are treated as if they specify the default graph.
- From data - Imports data into the graph(s) specified by the data source.
- The default graph - Imports all data into the default graph.
- Named graph - Imports everything into a user-specified named graph.
Enable replacement of existing data - Enable this to replace the data in one or more graphs with the imported data.
Replaced graph(s) - All specified graphs will be cleared before the import is run. If a graph ends in *, it will be treated as a prefix matching all named graphs starting with that prefix excluding the *. This option provides the most flexibility when the target graphs are determined from data.
I understand that data in the replaced graphs will be cleared before importing new data - this option must be checked when the data replacement is enabled.
Preserve BNode IDs - determines whether GraphDB assigns its own internal blank node identifiers or uses the blank node IDs found in the file.
Fail parsing if datatypes are not recognised - determines whether to fail parsing if datatypes are unknown.
Verify recognised datatypes - verifies that the values of the datatype properties in the file are valid.
Normalize recognised datatype values - indicates whether the values of recognised datatypes should be normalized.
Fail parsing if languages are not recognised - determines whether to fail parsing if languages are unknown.
Verify language based on a given set of definitions for valid languages - determines whether language tags are to be verified.
Normalize recognised language tags - indicates whether language tags need to be normalized, and to which format they should be normalized.
Verify URI syntax - controls if URIs should be verified to contain only legal characters.
Verify relative URIs - controls whether relative URIs are verified.
Should stop on error - determines whether to stop on non-fatal errors instead of ignoring them.
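The "ends in *" rule for Replaced graph(s) can be illustrated with a short sketch (a hypothetical helper for illustration, not GraphDB's actual implementation):

```python
# Hypothetical helper illustrating the "ends in *" rule for Replaced graph(s);
# not GraphDB's actual code.
def matches_replaced_graph(pattern, graph_iri):
    """A pattern ending in '*' matches every named graph IRI that starts
    with the prefix before the '*'; otherwise the match must be exact."""
    if pattern.endswith("*"):
        return graph_iri.startswith(pattern[:-1])
    return graph_iri == pattern

print(matches_replaced_graph("http://example.org/data/*",
                             "http://example.org/data/2024"))  # True
```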
Note
Import without changing settings will import selected files or folders using their saved settings or default ones.

Importing local files¶
Upload RDF files allows you to select, configure, and import data files in various RDF formats.
Note
The limitation of this method is that it supports files of a limited size. The default limit is 200 MB, controlled by the graphdb.workbench.maxUploadSize property. The value is in bytes (e.g., -Dgraphdb.workbench.maxUploadSize=20971520 sets a 20 MB limit).
Loading data from the Local files tab directly streams the file to the RDF4J statements endpoint:
- Click the icon to browse files for uploading;
- When the files appear in the table, either import a file by clicking Import on its line or select multiple files and click Import from the header;
- The import settings modal appears, in case you want to adjust any additional settings.
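Under the hood, this corresponds to a POST against the repository's RDF4J statements endpoint. A minimal sketch of such a request, assuming a repository named myrepo on a local GraphDB at port 7200 (both are placeholder values):

```python
# Sketch of the request the Workbench sends when importing a local file.
# "myrepo" and the base URL are assumptions for illustration.
import urllib.request

def build_import_request(base_url, repository, data, content_type="text/turtle"):
    """Build a POST request against the RDF4J statements endpoint."""
    url = f"{base_url}/repositories/{repository}/statements"
    return urllib.request.Request(
        url,
        data=data,
        method="POST",
        headers={"Content-Type": content_type},
    )

req = build_import_request(
    "http://localhost:7200", "myrepo",
    b"<http://example.org/s> <http://example.org/p> <http://example.org/o> .",
)
print(req.full_url)  # http://localhost:7200/repositories/myrepo/statements
```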

Importing server files¶
The server files import allows you to load files of arbitrary size. Its limitation is that the files must be put (symbolic links are supported) in a specific directory. By default, it is ${user.home}/graphdb-import/.
If you want to tweak the directory location, see the graphdb.workbench.importDirectory system property. The directory is scanned recursively, and all files with a semantic MIME type are visible in the Server files tab.
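The recursive scan can be pictured with a short sketch (a hypothetical re-implementation; the extension set shown is only an assumed subset of the formats with a semantic MIME type):

```python
# Hypothetical sketch of the recursive scan; not GraphDB's actual code.
from pathlib import Path

# Assumed subset of extensions that map to a semantic MIME type.
RDF_EXTENSIONS = {".ttl", ".rdf", ".nt", ".nq", ".trig", ".jsonld", ".owl"}

def scan_import_directory(root):
    """Recursively collect files whose extension suggests an RDF
    serialization, mirroring what the Server files tab lists."""
    return sorted(
        p for p in Path(root).rglob("*")
        if p.is_file() and p.suffix.lower() in RDF_EXTENSIONS
    )
```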
Importing remote content¶
You can import from a URL that serves RDF data. Any endpoint that returns RDF data may be used.

If the URL has an extension, it is used to detect the correct data type (e.g., http://linkedlifedata.com/resource/umls-concept/C0024117.rdf). Otherwise, you have to provide the Data Format parameter, which is sent as the Accept header to the endpoint and then passed to the import loader.
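The mapping from the Data Format parameter to the Accept header can be sketched as follows (the format names and the MIME-type subset here are assumptions for illustration):

```python
# Sketch of how a Data Format choice becomes an Accept header.
import urllib.request

# Assumed subset of the format-name-to-MIME-type mapping.
FORMAT_TO_MIME = {
    "Turtle": "text/turtle",
    "RDF/XML": "application/rdf+xml",
    "N-Triples": "application/n-triples",
}

def build_fetch_request(url, data_format=None):
    """Request the remote RDF resource; send an explicit Accept header
    only when a Data Format is supplied (i.e., no extension on the URL)."""
    headers = {}
    if data_format is not None:
        headers["Accept"] = FORMAT_TO_MIME[data_format]
    return urllib.request.Request(url, headers=headers)

req = build_fetch_request("http://example.org/data", data_format="Turtle")
```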
Importing RDF data from a text snippet¶
You can import data by typing or pasting it directly in the Text area control. This very simple text import sends the data to the Repository Statements Endpoint.

Import data with an INSERT query¶
You can also insert triples into a graph with an INSERT query in the SPARQL editor.
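Such an update can also be submitted programmatically against the repository statements endpoint. A minimal sketch, assuming a repository named myrepo and placeholder IRIs throughout:

```python
# Sketch: submitting a SPARQL INSERT DATA update to the statements endpoint.
# The repository ID, base URL, and all IRIs are placeholders.
import urllib.request

UPDATE = """
INSERT DATA {
  GRAPH <http://example.org/graph> {
    <http://example.org/subject> <http://example.org/predicate> "object value" .
  }
}
"""

def build_update_request(base_url, repository, update):
    """POST a SPARQL update to the repository statements endpoint."""
    return urllib.request.Request(
        f"{base_url}/repositories/{repository}/statements",
        data=update.encode("utf-8"),
        method="POST",
        headers={"Content-Type": "application/sparql-update"},
    )

req = build_update_request("http://localhost:7200", "myrepo", UPDATE)
```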
