There are several ways of importing data:

• from local files;
• from files on the server where the workbench is located;
• from a remote URL (with a format extension or by specifying the data format);
• by pasting the RDF data in the Text area tab;
• from a SPARQL construct query directly.

All import methods support asynchronous execution of the import tasks, except for the text area import, which is intended for very fast and simple imports.

Note

Currently, only one import task of a given type is executed at a time, while the others wait in the queue as pending.

Note

For local repositories, the parsing is done by the Workbench, so interruption and additional settings are supported.
When the location is remote, the data is simply sent to the remote endpoint, and the parsing and loading are performed there.

A file name filter is available to narrow down the list if you have many files.

## Import settings¶

The settings for each import are saved so that you can reuse them if you want to re-import a file. They are:

• Base URI - specifies the base URI against which to resolve any relative URIs found in the uploaded data;
• Context - if specified, imports the data into the specified context;
• Preserve BNode IDs - when enabled, uses the blank node IDs found in the file; otherwise, the parser assigns its own internal blank node identifiers;
• Fail parsing if datatypes are not recognised - determines whether parsing fails when datatypes are unknown;
• Verify recognised datatypes - verifies that the values of the datatype properties in the file are valid;
• Normalize recognised datatype values - indicates whether recognised datatypes need to have their values normalized;
• Fail parsing if languages are not recognised - determines whether parsing fails when language tags are unknown;
• Verify language based on a given set of definitions for valid languages - determines whether language tags are verified;
• Normalize recognised language tags - indicates whether language tags need to be normalized, and to which format they should be normalized;
• Verify URI syntax - controls whether URIs are verified to contain only legal characters;
• Verify relative URIs - controls whether relative URIs are verified;
• Should stop on error - determines whether to stop on non-fatal errors or ignore them.

## Importing local files¶

Note

The limitation of this method is that it supports files of a limited size only. The default limit is 200 MB and is controlled by the graphdb.workbench.maxUploadSize property. The value is specified in bytes (e.g., -Dgraphdb.workbench.maxUploadSize=20971520 sets a 20 MB limit).

Loading data from the Local files tab directly streams the file to the RDF4J statements endpoint:

1. When the files appear in the table, either import a file by clicking Import on its line, or select multiple files and click Batch import;
2. The import settings modal appears, in case you want to adjust additional settings.
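As a rough sketch of what the Workbench does behind the scenes, the upload targets the repository's RDF4J statements endpoint, with import settings such as the context and base URI passed as query parameters. The repository ID, port, and IRIs below are hypothetical examples:

```python
from urllib.parse import urlencode

# Hypothetical repository "repo" on a local GraphDB instance.
base = "http://localhost:7200/repositories/repo/statements"
params = {
    "context": "<http://example.org/graph1>",  # target named graph (example IRI)
    "baseURI": "http://example.org/base",      # resolves relative URIs in the data
}
url = base + "?" + urlencode(params)
print(url)
```

The file contents would then be streamed in the request body with a Content-Type matching the RDF serialization (e.g., text/turtle).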

## Importing server files¶

The server files import allows you to load files of arbitrary sizes. Its limitation is that the files must be put (symbolic links are supported) in a specific directory. By default, it is ${user.home}/graphdb-import/.

If you want to tweak the directory location, see the graphdb.workbench.importDirectory system property. The directory is scanned recursively and all files with a semantic MIME type are visible in the Server files tab.

## Importing remote content¶

You can import from a URL pointing to RDF data. Any endpoint that returns RDF data can be used.

If the URL has an extension, it is used to detect the correct data format (e.g., http://linkedlifedata.com/resource/umls-concept/C0024117.rdf). Otherwise, you have to provide the Data Format parameter, which is sent as the Accept header to the endpoint and then to the import loader.
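The extension-to-format detection can be sketched roughly as below. The mapping uses the standard RDF media types; the function name and the fallback to RDF/XML are illustrative assumptions, not the Workbench's actual code:

```python
# Standard RDF media types keyed by file extension.
RDF_MEDIA_TYPES = {
    ".rdf": "application/rdf+xml",
    ".ttl": "text/turtle",
    ".nt": "application/n-triples",
    ".nq": "application/n-quads",
    ".trig": "application/trig",
    ".jsonld": "application/ld+json",
}

def accept_header(url, data_format=None):
    # Look at the last path segment for an extension; if none is found,
    # fall back to the user-supplied Data Format parameter.
    last = url.rsplit("/", 1)[-1]
    ext = "." + last.rsplit(".", 1)[-1] if "." in last else ""
    return RDF_MEDIA_TYPES.get(ext, data_format or "application/rdf+xml")

print(accept_header("http://linkedlifedata.com/resource/umls-concept/C0024117.rdf"))
```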

## Paste and import¶

You can import data by pasting it directly into the Text area tab. This very simple text import sends the data to the Repository Statements Endpoint.

## Import data with an INSERT query¶

You can also insert triples into a graph with an INSERT query in the SPARQL editor.
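For example, a query like the following adds a triple to a named graph (the graph IRI and triple below are placeholders for illustration):

```sparql
INSERT DATA {
  GRAPH <http://example.org/graph1> {
    <http://example.org/subject> <http://example.org/predicate> "object value" .
  }
}
```

Omitting the GRAPH clause inserts the triple into the default graph instead.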