Lucene GraphDB Connector¶
Overview and features¶
The GraphDB Connectors provide extremely fast normal and faceted (aggregation) searches of the kind typically implemented by an external component or service such as Lucene, with the additional benefit of staying automatically up-to-date with the GraphDB repository data.
Note
GraphDB supports full-text search options as well.
The Connectors provide synchronization at the entity level, where an entity is defined as having a unique identifier (an IRI) and a set of properties and property values. In terms of RDF, this corresponds to a set of triples that have the same subject. In addition to simple properties (defined by a single triple), the Connectors support property chains. A property chain is defined as a sequence of triples where each triple’s object is the subject of the following triple.
The main features of the GraphDB Connectors are:
maintaining an index that is always in sync with the data stored in GraphDB;
multiple independent instances per repository;
the entities for synchronization are defined by:
a list of fields (on the Lucene side) and property chains (on the GraphDB side) whose values will be synchronized;
a list of rdf:type's of the entities for synchronization;
a list of languages for synchronization (the default is all languages);
additional filtering by property and value.
full-text search using native Lucene queries;
snippet extraction: highlighting of search terms in the search result;
faceted search;
sorting by any preconfigured field;
paging of results using offset and limit;
custom mapping of RDF types to Lucene types;
specifying which Lucene analyzer to use (the default is Lucene's StandardAnalyzer);
stripping HTML/XML tags in literals (the default is not to strip markup);
boosting an entity by the numeric value of one or more predicates;
custom scoring expressions at query time to evaluate a total score based on Lucene score and entity boost.
Each feature is described in detail below.
Usage¶
All interactions with the Lucene GraphDB Connector are done through SPARQL queries.
There are three types of SPARQL queries:
INSERT for creating, updating, and deleting connector instances;
SELECT for listing connector instances and querying their configuration parameters;
INSERT/SELECT for storing and querying data as part of the normal GraphDB data workflow.
In general, this corresponds to INSERT that adds or modifies data, and to SELECT that queries existing data.
Each connector implementation defines its own IRI prefix to distinguish it from other connectors. For the Lucene GraphDB Connector, this is http://www.ontotext.com/connectors/lucene#. Each command or predicate executed by the connector uses this prefix, e.g., http://www.ontotext.com/connectors/lucene#createConnector to create a connector instance for Lucene.
Individual instances of a connector are distinguished by unique names that are also IRIs. They have their own prefix to avoid clashing with any of the command predicates. For Lucene, the instance prefix is http://www.ontotext.com/connectors/lucene/instance#.
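All examples below therefore start by declaring the two prefixes:

PREFIX luc: <http://www.ontotext.com/connectors/lucene#>
PREFIX luc-index: <http://www.ontotext.com/connectors/lucene/instance#>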
- Sample data
All examples use the following sample data that describes five fictitious wines: Yoyowine, Franvino, Noirette, Blanquito, and Rozova, as well as the grape varieties required to make these wines. The minimum required ruleset level in GraphDB is RDFS.

@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix wine: <http://www.ontotext.com/example/wine#> .

wine:RedWine rdfs:subClassOf wine:Wine .
wine:WhiteWine rdfs:subClassOf wine:Wine .
wine:RoseWine rdfs:subClassOf wine:Wine .

wine:Merlo
    rdf:type wine:Grape ;
    rdfs:label "Merlo" .

wine:CabernetSauvignon
    rdf:type wine:Grape ;
    rdfs:label "Cabernet Sauvignon" .

wine:CabernetFranc
    rdf:type wine:Grape ;
    rdfs:label "Cabernet Franc" .

wine:PinotNoir
    rdf:type wine:Grape ;
    rdfs:label "Pinot Noir" .

wine:Chardonnay
    rdf:type wine:Grape ;
    rdfs:label "Chardonnay" .

wine:Yoyowine
    rdf:type wine:RedWine ;
    wine:madeFromGrape wine:CabernetSauvignon ;
    wine:hasSugar "dry" ;
    wine:hasYear "2013"^^xsd:integer .

wine:Franvino
    rdf:type wine:RedWine ;
    wine:madeFromGrape wine:Merlo ;
    wine:madeFromGrape wine:CabernetFranc ;
    wine:hasSugar "dry" ;
    wine:hasYear "2012"^^xsd:integer .

wine:Noirette
    rdf:type wine:RedWine ;
    wine:madeFromGrape wine:PinotNoir ;
    wine:hasSugar "medium" ;
    wine:hasYear "2012"^^xsd:integer .

wine:Blanquito
    rdf:type wine:WhiteWine ;
    wine:madeFromGrape wine:Chardonnay ;
    wine:hasSugar "dry" ;
    wine:hasYear "2012"^^xsd:integer .

wine:Rozova
    rdf:type wine:RoseWine ;
    wine:madeFromGrape wine:PinotNoir ;
    wine:hasSugar "medium" ;
    wine:hasYear "2013"^^xsd:integer .
Setup and maintenance¶
- Third-party component versions
This version of the Lucene GraphDB Connector uses Lucene version 8.11.1.
Creating a connector instance¶
Creating a connector instance is done by sending a SPARQL query with the following configuration data:
the name of the connector instance (e.g., my_index);
classes to synchronize;
properties to synchronize.
The configuration data has to be provided as a JSON string representation and passed together with the create command.
You can create connectors via a Workbench dialog or by using a SPARQL update query (create command).
No matter which way you choose, you will be presented with a pop-up screen showing you the connector creation progress.
Using the Workbench¶
Go to Setup ‣ Connectors.
Click New Connector in the tab of the respective Connector type you want to create.
Fill out the configuration form.
Execute the CREATE statement from the form by clicking OK. Alternatively, you can view its SPARQL query by clicking View SPARQL Query, and then copy it to execute it manually or integrate it in automation scripts.
Using the create command¶
The create command is triggered by a SPARQL INSERT with the luc:createConnector predicate, e.g., the following creates a connector instance called my_index, which synchronizes the wines from the sample data above.
To be able to use newlines and quotes without the need for escaping, here we use SPARQL’s multi-line string delimiter consisting of 3 apostrophes: '''...'''. You can also use 3 quotes instead: """...""".
PREFIX luc: <http://www.ontotext.com/connectors/lucene#>
PREFIX luc-index: <http://www.ontotext.com/connectors/lucene/instance#>
INSERT DATA {
luc-index:my_index luc:createConnector '''
{
"types": [
"http://www.ontotext.com/example/wine#Wine"
],
"fields": [
{
"fieldName": "grape",
"propertyChain": [
"http://www.ontotext.com/example/wine#madeFromGrape",
"http://www.w3.org/2000/01/rdf-schema#label"
]
},
{
"fieldName": "sugar",
"propertyChain": [
"http://www.ontotext.com/example/wine#hasSugar"
],
"analyzed": false,
"multivalued": false
},
{
"fieldName": "year",
"propertyChain": [
"http://www.ontotext.com/example/wine#hasYear"
],
"analyzed": false
}
]
}
''' .
}
The above command creates a new Lucene connector instance. The "types" key defines the RDF type of the entities to synchronize and, in the example, it is only entities of the type http://www.ontotext.com/example/wine#Wine (and its subtypes if RDFS or higher-level reasoning is enabled). The "fields" key defines the mapping from RDF to Lucene. The basic building block is the property chain, i.e., a sequence of RDF properties where the object of each property is the subject of the following property. In the example, three bits of information are mapped - the grape the wines are made of, sugar content, and year. Each chain is assigned a short and convenient field name: “grape”, “sugar”, and “year”. The field names are later used in the queries.
The field grape is an example of a property chain composed of more than one property. First, we take the wine's madeFromGrape property, the object of which is an instance of the type Grape, and then we take the rdfs:label of this instance. The fields sugar and year are both composed of a single property that links the value directly to the wine.
The fields sugar and year contain discrete values, such as medium, dry, 2012, 2013, and thus it is best to specify the option analyzed: false as well. See analyzed in List of creation parameters for more information.
Dropping a connector instance¶
Dropping (deleting) a connector instance removes all references to its external store from GraphDB, as well as all Lucene files associated with it.
The drop command is triggered by a SPARQL INSERT with the dropConnector predicate, where the name of the connector instance has to be in the subject position, e.g., this removes the connector my_index:
PREFIX luc: <http://www.ontotext.com/connectors/lucene#>
PREFIX luc-index: <http://www.ontotext.com/connectors/lucene/instance#>
INSERT DATA {
luc-index:my_index luc:dropConnector [] .
}
You can also force drop a connector in case a normal delete does not work. The force delete will remove the connector even if part of the operation fails. Go to Setup ‣ Connectors, where you will see the already existing connectors that you have created. Click the delete icon, and check Force delete in the dialog box.
Retrieving the create options for a connector instance¶
You can view the options string that was used to create a particular connector instance with the following query:
PREFIX luc: <http://www.ontotext.com/connectors/lucene#>
PREFIX luc-index: <http://www.ontotext.com/connectors/lucene/instance#>
SELECT ?createString {
luc-index:my_index luc:listOptionValues ?createString .
}
Listing available connector instances¶
In the Connectors management view¶
Existing Connector instances are shown below the New Connector button. Click the name of an instance to view its configuration and SPARQL query, or click the repair / delete icons to perform these operations. Click the copy icon to copy the connector definition query to your clipboard.
With a SPARQL query¶
Listing connector instances returns all previously created instances. It is a SELECT query with the listConnectors predicate:
PREFIX luc: <http://www.ontotext.com/connectors/lucene#>
SELECT ?cntUri ?cntStr {
?cntUri luc:listConnectors ?cntStr .
}
?cntUri is bound to the prefixed IRI of the connector instance that was used during creation, e.g., http://www.ontotext.com/connectors/lucene/instance#my_index, while ?cntStr is bound to a string, representing the part after the prefix, e.g., "my_index".
Instance status check¶
The internal state of each connector instance can be queried using a SELECT query and the connectorStatus predicate:
PREFIX luc: <http://www.ontotext.com/connectors/lucene#>
SELECT ?cntUri ?cntStatus {
?cntUri luc:connectorStatus ?cntStatus .
}
?cntUri is bound to the prefixed IRI of the connector instance, while ?cntStatus is bound to a string representation of the status of the connector represented by this IRI. The status is key-value based.
Working with data¶
Adding, updating, and deleting data¶
From the user point of view, all synchronization happens transparently without using any additional predicates or naming a specific store explicitly, i.e., you simply execute standard SPARQL INSERT/DELETE queries. This is achieved by intercepting all changes in the plugin and determining which Lucene documents need to be updated.
Simple queries¶
Once a connector instance has been created, it is possible to query data from it through SPARQL. For each matching Lucene document, the connector instance returns the document subject. In its simplest form, querying is achieved by using a SELECT and providing the Lucene query as the object of the luc:query predicate:
PREFIX luc: <http://www.ontotext.com/connectors/lucene#>
PREFIX luc-index: <http://www.ontotext.com/connectors/lucene/instance#>
SELECT ?entity {
?search a luc-index:my_index ;
luc:query "grape:cabernet" ;
luc:entities ?entity .
}
The result binds ?entity to the two wines made from grapes that have “cabernet” in their name, namely :Yoyowine and :Franvino.
Note
You must use the field names you chose when you created the connector instance. They can be identical to the property IRIs but you must escape any special characters according to what Lucene expects.
Get a query instance of the requested connector instance by using the RDF notation "X a Y" (= X rdf:type Y), where X is a variable and Y is a connector instance IRI. X is bound to a query instance of the connector instance.
Assign a query to the query instance by using the system predicate luc:query.
Request the matching entities through the luc:entities predicate.
It is also possible to provide per query search options by using one or more option predicates. The option predicates are described in detail below.
Combining Lucene results with GraphDB data¶
The bound ?entity
can be used in other SPARQL triples in order to build
complex queries that join to or fetch additional data from GraphDB, for example, to
see the actual grapes in the matching wines as well as the year they
were made:
PREFIX luc: <http://www.ontotext.com/connectors/lucene#>
PREFIX luc-index: <http://www.ontotext.com/connectors/lucene/instance#>
PREFIX wine: <http://www.ontotext.com/example/wine#>
SELECT ?entity ?grape ?year {
?search a luc-index:my_index ;
luc:query "grape:cabernet" ;
luc:entities ?entity .
?entity wine:madeFromGrape ?grape .
?entity wine:hasYear ?year
}
The result looks like this:

entity | grape | year
---|---|---
wine:Franvino | wine:Merlo | 2012
wine:Franvino | wine:CabernetFranc | 2012
wine:Yoyowine | wine:CabernetSauvignon | 2013
Note
:Franvino
is returned twice because it is made from two
different grapes, both of which are returned.
Entity match score¶
It is possible to access the match score returned by Lucene
with the score
predicate. As each entity has its own score, the predicate
should come at the entity level. For example:
PREFIX luc: <http://www.ontotext.com/connectors/lucene#>
PREFIX luc-index: <http://www.ontotext.com/connectors/lucene/instance#>
SELECT ?entity ?score {
?search a luc-index:my_index ;
luc:query "grape:cabernet" ;
luc:entities ?entity .
?entity luc:score ?score
}
The result binds ?score to each matched entity; the actual score values depend on the specific Lucene version.
Basic facet queries¶
Consider the sample wine data and the my_index
connector instance
described previously. You can also query facets using the same instance:
PREFIX luc: <http://www.ontotext.com/connectors/lucene#>
PREFIX luc-index: <http://www.ontotext.com/connectors/lucene/instance#>
SELECT ?facetName ?facetValue ?facetCount WHERE {
# Note empty query is allowed and will just match all documents, hence no :query
?r a luc-index:my_index ;
luc:facetFields "year,sugar" ;
luc:facets [
luc:facetName ?facetName;
luc:facetValue ?facetValue;
luc:facetCount ?facetCount
]
}
It is important to specify the facet fields by using the facetFields
predicate. Its value is a simple comma-delimited list of field names. In
order to get the faceted results, use the luc:facets
predicate. As each
facet has three components (name, value and count), the luc:facets
predicate returns multiple nodes that can be used
to access the individual values for each component
through the predicates facetName
, facetValue
, and facetCount
.
The resulting bindings look like the following:

facetName | facetValue | facetCount
---|---|---
year | 2012 | 3
year | 2013 | 2
sugar | dry | 3
sugar | medium | 2
You can easily see that there are three wines produced in 2012 and two in 2013. You also see that three of the wines are dry, while two are medium. However, it is not necessarily true that the three wines produced in 2012 are the same as the three dry wines as each facet is computed independently.
Sorting¶
It is possible to sort the entities returned by a connector query
according to one or more fields. Sorting is achieved by the orderBy
predicate the value of which is a comma-delimited list of fields. Each
field can be prefixed with a minus to indicate sorting in descending
order. For example:
PREFIX luc: <http://www.ontotext.com/connectors/lucene#>
PREFIX luc-index: <http://www.ontotext.com/connectors/lucene/instance#>
PREFIX wine: <http://www.ontotext.com/example/wine#>
SELECT ?entity ?sugar{
?search a luc-index:my_index ;
luc:query "year:2013" ;
luc:orderBy "-sugar" ;
luc:entities ?entity.
?entity wine:hasSugar ?sugar
}
The result contains wines produced in 2013, sorted according to their sugar content in descending order:

entity | sugar
---|---
wine:Rozova | medium
wine:Yoyowine | dry
By default, entities are sorted according to their matching score in descending order.
Note
If you join the entity from the connector query to other
triples stored in GraphDB, GraphDB might scramble the order. To
remedy this, use ORDER BY
from SPARQL.
Tip
Sorting by an analyzed textual field works but might produce
unexpected results. Analyzed textual fields are composed of tokens
and sorting uses the least (in the lexicographical sense) token. For
example, “North America” will be sorted before “Europe” because the
token “america” is lexicographically smaller than the token
“europe”. If you need to sort by a textual field and still do
full-text search on it, it is best to create a copy of the field
with the setting "analyzed": false
. For more information, see
Copy fields.
Note
Unlike Lucene 4, which was used in GraphDB 6.x, Lucene 5 imposes
an additional requirement on fields used for sorting.
They must be defined with multivalued = false
.
Limit and offset¶
Limit and offset are supported on the Lucene side of the query. This is achieved through the predicates limit and offset. Consider this example in which an offset of 1 and a limit of 1 are specified:
PREFIX luc: <http://www.ontotext.com/connectors/lucene#>
PREFIX luc-index: <http://www.ontotext.com/connectors/lucene/instance#>
SELECT ?entity {
?search a luc-index:my_index ;
luc:query "sugar:dry" ;
luc:offset "1" ;
luc:limit "1" ;
luc:entities ?entity .
}
offset is counted from 0. The result contains a single wine, Franvino. If you execute the query without the limit and offset, Franvino will be second in the list:

entity
---
wine:Yoyowine
wine:Franvino
wine:Blanquito
Note
The specific order in which GraphDB returns the results depends on how Lucene returns the matches, unless sorting is specified.
Snippet extraction¶
Snippet extraction is used for extracting highlighted snippets of text that
match the query. The snippets are accessed through the dedicated
predicate luc:snippets
. It binds a blank node that in turn provides the
actual snippets via the predicates luc:snippetField
and luc:snippetText
.
The predicate snippets must be attached to the entity, as each entity
has a different set of snippets. For example, in a search for Cabernet:
PREFIX luc: <http://www.ontotext.com/connectors/lucene#>
PREFIX luc-index: <http://www.ontotext.com/connectors/lucene/instance#>
SELECT ?entity ?snippetField ?snippetText {
?search a luc-index:my_index ;
luc:query "grape:cabernet" ;
luc:entities ?entity .
?entity luc:snippets ?snippet .
?snippet luc:snippetField ?snippetField ;
luc:snippetText ?snippetText .
}
the query returns the two wines made from Cabernet Sauvignon or Cabernet Franc grapes, as well as the respective matching fields and snippets.
Note
The actual snippets might be different as this depends on the specific Lucene implementation.
It is possible to tweak how the snippets are collected/composed by using the following option predicates:
luc:snippetSize - sets the maximum size of the extracted text fragment, 250 by default;
luc:snippetSpanOpen - text to insert before the highlighted text, <em> by default;
luc:snippetSpanClose - text to insert after the highlighted text, </em> by default.
The option predicates are set on the query instance, much like the
luc:query
predicate.
Total hits¶
You can get the total number of matching Lucene documents (hits) by using the luc:totalHits
predicate, e.g., for the connector instance my_index
and a query that
retrieves all wines made in 2012:
PREFIX luc: <http://www.ontotext.com/connectors/lucene#>
PREFIX luc-index: <http://www.ontotext.com/connectors/lucene/instance#>
SELECT ?totalHits {
?r a luc-index:my_index ;
luc:query "year:2012" ;
luc:totalHits ?totalHits .
}
As there are three wines made in 2012, the value 3 (of type xsd:long) binds to ?totalHits.
As you see above, you can omit returning any of the matching entities. This can be useful if there are many hits and you want to calculate pagination parameters.
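For example, a single query can retrieve both a page of entities and the total count (a sketch combining the predicates shown above, with an assumed page size of 10):

PREFIX luc: <http://www.ontotext.com/connectors/lucene#>
PREFIX luc-index: <http://www.ontotext.com/connectors/lucene/instance#>
SELECT ?entity ?totalHits {
    ?search a luc-index:my_index ;
        luc:query "year:2012" ;
        luc:offset "0" ;
        luc:limit "10" ;
        luc:entities ?entity ;
        luc:totalHits ?totalHits .
}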
List of creation parameters¶
The creation parameters define how a connector instance is created by
the luc:createConnector
predicate. Some are required and some are optional.
All parameters are provided together in a JSON object, where the
parameter names are the object keys. Parameter values may be simple JSON
values such as a string or a boolean, or they can be lists or objects.
All of the creation parameters can also be set conveniently from the Create Connector user interface in the GraphDB Workbench without any knowledge of JSON.
readonly (boolean), optional, read-only mode
A read-only connector will index all existing data in the repository at creation time, but, unlike non-read-only connectors, it will:
Not react to updates. Changes will not be synced to the connector.
Not keep any extra structures (such as the internal Lucene index for tracking updates to chains)
The only way to index changes in data after the connector has been created is to repair (or drop/recreate) the connector.
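For illustration, a minimal sketch of a read-only variant of the wine index (the instance name my_readonly_index is chosen here only for the example):

PREFIX luc: <http://www.ontotext.com/connectors/lucene#>
PREFIX luc-index: <http://www.ontotext.com/connectors/lucene/instance#>
INSERT DATA {
    luc-index:my_readonly_index luc:createConnector '''
    {
        "readonly": true,
        "types": [
            "http://www.ontotext.com/example/wine#Wine"
        ],
        "fields": [
            {
                "fieldName": "grape",
                "propertyChain": [
                    "http://www.ontotext.com/example/wine#madeFromGrape",
                    "http://www.w3.org/2000/01/rdf-schema#label"
                ]
            }
        ]
    }
    ''' .
}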
importGraph (boolean), optional, specifies that the RDF data from which to create the connector is in a special virtual graph
Used to make a Lucene index from temporary RDF data inserted in the same transaction. This functionality requires readonly mode and creates a connector whose data will come from statements inserted into a special virtual graph instead of data contained in the repository. The virtual graph is luc:graph, where the prefix luc: is as defined before. Data needs to be inserted into this graph before the connector create statement is executed.
Both the insertion into the special graph and the create statement must be in the same transaction. In the GraphDB Workbench, this can be done by pasting them one after another in the SPARQL editor and putting a semicolon at the end of the first INSERT.

PREFIX luc: <http://www.ontotext.com/connectors/lucene#>
INSERT {
    GRAPH luc:graph {
        ...
    }
} WHERE {
    ...
};

PREFIX luc: <http://www.ontotext.com/connectors/lucene#>
PREFIX luc-index: <http://www.ontotext.com/connectors/lucene/instance#>
INSERT DATA {
    luc-index:my_index luc:createConnector '''
    {
        "readonly": true,
        "importGraph": true,
        "fields": [],
        "languages": [],
        "types": []
    }
    ''' .
}
importFile (string), optional, an RDF file with data from which to create the connector
Creates a connector whose data will come from an RDF file on the file system instead of data contained in the repository. The value must be the full path to the RDF file. This functionality requires readonly mode.
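For example, a sketch of a read-only connector built from a file, where the path /tmp/wines.ttl is hypothetical:

PREFIX luc: <http://www.ontotext.com/connectors/lucene#>
PREFIX luc-index: <http://www.ontotext.com/connectors/lucene/instance#>
INSERT DATA {
    # /tmp/wines.ttl is a hypothetical path used only for illustration
    luc-index:my_index luc:createConnector '''
    {
        "readonly": true,
        "importFile": "/tmp/wines.ttl",
        "types": [
            "http://www.ontotext.com/example/wine#Wine"
        ],
        "fields": [
            {
                "fieldName": "sugar",
                "propertyChain": [
                    "http://www.ontotext.com/example/wine#hasSugar"
                ]
            }
        ]
    }
    ''' .
}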
detectFields (boolean), optional, detects fields
This mode introduces automatic field detection when creating a connector. You can omit specifying fields in JSON. Instead, you will get automatic fields: each corresponds to a single predicate, and its field name is the same as the predicate (so you need to use escaping when issuing Lucene queries). In this mode, specifying types is optional too. If types are not provided, then all types will be indexed. This mode requires importGraph or importFile.
Once the connector is created, you can inspect the detected fields in the Connector management section of the Workbench.
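A minimal sketch of such a definition, reusing the hypothetical import file from above:

PREFIX luc: <http://www.ontotext.com/connectors/lucene#>
PREFIX luc-index: <http://www.ontotext.com/connectors/lucene/instance#>
INSERT DATA {
    # "fields" and "types" are omitted: they will be detected automatically
    luc-index:my_detected_index luc:createConnector '''
    {
        "readonly": true,
        "importFile": "/tmp/wines.ttl",
        "detectFields": true
    }
    ''' .
}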
analyzer (string), optional, specifies Lucene analyzer
The Lucene Connector supports custom Analyzer implementations. They may be specified via the analyzer parameter whose value must be a fully qualified name of a class that extends org.apache.lucene.analysis.Analyzer. The class requires either a default constructor or a constructor with exactly one parameter of type org.apache.lucene.util.Version. For example, these two classes are valid implementations:

package com.ontotext.example;

import org.apache.lucene.analysis.Analyzer;

public class FancyAnalyzer extends Analyzer {
    public FancyAnalyzer() {
        ...
    }
    ...
}

package com.ontotext.example;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.util.Version;

public class SmartAnalyzer extends Analyzer {
    public SmartAnalyzer(Version luceneVersion) {
        ...
    }
    ...
}

FancyAnalyzer and SmartAnalyzer can then be used by specifying their fully qualified names, for example:

...
"analyzer": "com.ontotext.example.SmartAnalyzer",
...
types (list of IRIs), required, specifies the types of entities to sync
The RDF types of entities to sync are specified as a list of IRIs. At least one type IRI is required.
Use the pseudo-IRI $any to sync entities that have at least one RDF type.
Use the pseudo-IRI $untyped to sync entities regardless of whether they have any RDF type; see also the examples in General full-text search with the connectors.
languages (list of strings), optional, valid languages for literals
RDF data is often multilingual, but you can map only some of the languages represented in the literal values. This can be done by specifying a list of language ranges to be matched to the language tags of literals according to RFC 4647, Section 3.3.1, Basic Filtering. In addition, an empty range can be used to include literals that have no language tag. The list of language ranges maps all existing literals that have matching language tags.
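For example, to map only English and German literals plus literals without a language tag, a configuration might contain (a sketch; the empty string is the empty language range described above):

...
"languages": ["en", "de", ""],
...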
fields (list of field objects), required, defines the mapping from RDF to Lucene
The fields define exactly what parts of each entity will be synchronized as well as the specific details on the connector side. The field is the smallest synchronization unit and it maps a property chain from GraphDB to a field in Lucene. The fields are specified as a list of field objects. At least one field object is required. Each field object has further keys that specify details.
fieldName (string), required, the name of the field in Lucene
The name of the field defines the mapping on the connector side. It is specified by the key fieldName with a string value. The field name is used at query time to refer to the field. There are few restrictions on the allowed characters in a field name, but to avoid unnecessary escaping (which depends on how Lucene parses its queries), we recommend keeping the field names simple.
fieldNameTransform (one of none, predicate, or predicate.localName), optional, none by default
Defines an optional transformation of the field name. Although fieldName is always required, it is ignored if fieldNameTransform is predicate or predicate.localName.
none: The field name is supplied via the fieldName option.
predicate: The field name is equal to the full IRI of the last predicate of the chain, e.g., if the last predicate was http://www.w3.org/2000/01/rdf-schema#label, then the field name will be http://www.w3.org/2000/01/rdf-schema#label too.
predicate.localName: The field name is derived from the local name of the IRI of the last predicate of the chain, e.g., if the last predicate was http://www.w3.org/2000/01/rdf-schema#comment, then the field name will be comment.
See Indexing all literals in distinct fields for an example.
propertyChain (list of IRIs), required, defines the property chain to reach the value
The property chain (propertyChain) defines the mapping on the GraphDB side. A property chain is defined as a sequence of triples where the entity IRI is the subject of the first triple, its object is the subject of the next triple, and so on. In this model, a property chain with a single element corresponds to a direct property defined by a single triple. Property chains are specified as a list of IRIs where at least one IRI must be provided.
See Copy fields for defining multiple fields with the same property chain.
See Multiple property chains per field for defining a field whose values are populated from more than one property chain.
See Indexing language tags for defining a field whose values are populated with the language tags of literals.
See Indexing the IRI of an entity for defining a field whose values are populated with the IRI of the indexed entity.
See Wildcard literal indexing for defining a field whose values are populated with literals regardless of their predicate.
valueFilter (string), optional, specifies the value filter for the field
See also Entity filtering.
defaultValue (string), optional, specifies a default value for the field
The default value (defaultValue) provides means for specifying a default value for the field when the property chain has no matching values in GraphDB. The default value can be a plain literal, a literal with a datatype (xsd: prefix supported), a literal with language, or an IRI. It has no default value.
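For example, a sketch of a field definition that falls back to the illustrative default "unknown" when a wine has no sugar value in the repository:

...
{
    "fieldName": "sugar",
    "propertyChain": [
        "http://www.ontotext.com/example/wine#hasSugar"
    ],
    "defaultValue": "unknown"
}
...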
indexed (boolean), optional, default true
If indexed, a field is available for Lucene queries. true by default.
This option corresponds to Lucene's field option "indexed".
stored (boolean), optional, default true
Fields can be stored in Lucene, and this is controlled by the Boolean option "stored". Stored fields are required for retrieving snippets. true by default.
This option corresponds to Lucene's property "stored".
analyzed (boolean), optional, default true
When literal fields are indexed in Lucene, they will be analyzed according to the analyzer settings. Should you require that a given field is not analyzed, you may set "analyzed" to false. This option has no effect for IRIs (they are never analyzed). true by default.
This option corresponds to Lucene's property "tokenized".
multivalued (boolean), optional, default true
RDF properties and synchronized fields may have more than one value. If "multivalued" is set to true, all values will be synchronized to Lucene. If set to false, only a single value will be synchronized. true by default.
ignoreInvalidValues (boolean), optional, default false
Per-field option that controls what happens when a value cannot be converted to the requested (or previously detected) type. false by default.
Example use: when an invalid date literal like "2021-02-29"^^xsd:date (2021 is not a leap year) needs to be indexed as a date, or when an IRI needs to be indexed as a number.
Note that some conversions are always valid: any literal to an FTS field, any non-literal (IRI, blank node, embedded triple) to a non-analyzed field. When true, such values will be skipped with a note in the logs. When false, such values will break the transaction.
facet (boolean), optional, default true
Lucene needs to index data in a special way if it will be used for faceted search. This is controlled by the Boolean option "facet". true by default. Fields that are not synchronized for faceting are not available for faceted search.
datatype (string), optional, the manual datatype override
By default, the Lucene GraphDB Connector uses the datatype of literal values to determine how they must be mapped to Lucene types. For more information on the supported datatypes, see Datatype mapping.
The datatype mapping can be overridden through the parameter "datatype", which can be specified per field. The value of "datatype" can be any of the xsd: types supported by the automatic mapping.
valueFilter (string), optional, specifies the top-level value filter for the document
See also Entity filtering.
documentFilter (string), optional, specifies the top-level document filter for the document
See also Entity filtering.
Special field definitions¶
Copy fields¶
Often, it is convenient to synchronize one and the same data multiple times with different settings to accommodate different use cases, e.g., faceting or sorting vs. full-text search. The Lucene GraphDB Connector has explicit support for fields that copy their value from another field. This is achieved by specifying a single element in the property chain of the form @otherFieldName, where otherFieldName is another non-copy field. Take the following example:
...
"fields": [
    {
        "fieldName": "grape",
        "facet": false,
        "propertyChain": [
            "http://www.ontotext.com/example/wine#madeFromGrape",
            "http://www.w3.org/2000/01/rdf-schema#label"
        ],
        "analyzed": true
    },
    {
        "fieldName": "grapeFacet",
        "propertyChain": [
            "@grape"
        ],
        "analyzed": false
    }
]
...
The snippet creates an analyzed field “grape” and a non-analyzed field “grapeFacet”. Both fields are populated with the same values, and “grapeFacet” is defined as a copy field that refers to the field “grape”.
Note
The connector handles copy fields in a more optimal way than specifying a field with exactly the same property chain as another field.
Multiple property chains per field¶
Sometimes, you have to work with data models that define the same concept (in terms of what you want to index in Lucene) with more than one property chain, e.g., the concept of “name” could be defined as a single canonical name, multiple historical names and some unofficial names. If you want to index these together as a single field in Lucene you can define this as a multiple property chains field.
Fields with multiple property chains are defined as a set of separate virtual fields that will be merged into a single physical field when indexed. Virtual fields are distinguished by the suffix $xyz, where xyz is any alphanumeric sequence of convenience. For example, we can define the fields name$1 and name$2 like this:
...
"fields": [
    {
        "fieldName": "name$1",
        "propertyChain": [
            "http://www.ontotext.com/example#canonicalName"
        ]
    },
    {
        "fieldName": "name$2",
        "propertyChain": [
            "http://www.ontotext.com/example#historicalName"
        ]
    },
    ...
],
...
The values of the fields name$1
and name$2
will be merged
and synchronized to the field name
in Lucene.
Note
You cannot mix suffixed and unsuffixed fields with the same name, e.g., if you defined myField$new and myField$old, you cannot have a field called just myField.
Filters and fields with multiple property chains¶
Filters can be used with fields defined with multiple property chains. Both the physical field values and the individual virtual field values are available:
Physical fields are specified without the suffix, e.g., ?myField.
Virtual fields are specified with the suffix, e.g., ?myField$2 or ?myField$alt.
Note
Physical fields cannot be combined with parent() as their values come from different property chains. If you really need to filter the same parent level, you can rewrite parent(?myField) in (<urn:x>, <urn:y>) as parent(?myField$1) in (<urn:x>, <urn:y>) || parent(?myField$2) in (<urn:x>, <urn:y>) || parent(?myField$3) ... and surround it with parentheses if it is a part of a bigger expression.
Indexing language tags¶
The language tag of an RDF literal can be indexed by specifying a property chain where the last element is the pseudo-IRI lang(). The property preceding lang() must lead to a literal value. For example:
PREFIX luc: <http://www.ontotext.com/connectors/lucene#>
PREFIX luc-index: <http://www.ontotext.com/connectors/lucene/instance#>
INSERT DATA {
luc-index:my_index luc:createConnector '''
{
"types": ["http://www.ontotext.com/example#gadget"],
"fields": [
{
"fieldName": "name",
"propertyChain": [
"http://www.ontotext.com/example#name"
]
},
{
"fieldName": "nameLanguage",
"propertyChain": [
"http://www.ontotext.com/example#name",
"lang()"
]
}
]
}
''' .
}
The above connector will index the language tag of each literal value of the property http://www.ontotext.com/example#name into the field nameLanguage.
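Once indexed, the field can be queried like any other, e.g., a sketch that finds entities whose name has an English language tag (assuming matching data has been loaded):

PREFIX luc: <http://www.ontotext.com/connectors/lucene#>
PREFIX luc-index: <http://www.ontotext.com/connectors/lucene/instance#>
SELECT ?entity {
    ?search a luc-index:my_index ;
        luc:query "nameLanguage:en" ;
        luc:entities ?entity .
}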
Indexing named graphs¶
The named graph of a given value can be indexed by ending a property chain with the special pseudo-IRI graph(). Indexing the named graph of the value instead of the value itself allows searching by named graph.
PREFIX luc: <http://www.ontotext.com/connectors/lucene#>
PREFIX luc-index: <http://www.ontotext.com/connectors/lucene/instance#>
INSERT DATA {
luc-index:my_index luc:createConnector '''
{
"types": ["http://www.ontotext.com/example#gadget"],
"fields": [
{
"fieldName": "name",
"propertyChain": [
"http://www.ontotext.com/example#name"
]
},
{
"fieldName": "nameGraph",
"propertyChain": [
"http://www.ontotext.com/example#name",
"graph()"
]
}
]
}
''' .
}
The above connector will index the named graph of each value of the property http://www.ontotext.com/example#name into the field nameGraph.
Wildcard literal indexing¶
In this mode, the last element of a property chain is a wildcard that will match any predicate that leads to a literal value.
Use the special pseudo-IRI $literal
as the last element of the property chain to activate it.
Note
Currently, it really means any literal, including literals with data types.
For example:
{
"fields" : [ {
"propertyChain" : [ "$literal" ],
"fieldName" : "name"
}, {
"propertyChain" : [ "http://example.com/description", "$literal" ],
"fieldName" : "description"
}
...
}
See Indexing all literals for a detailed example.
Indexing the IRI of an entity¶
Sometimes you may need the IRI of each entity (e.g., http://www.ontotext.com/example/wine#Franvino
from our
small example dataset) indexed as a regular field. This can be achieved by specifying a property chain with a single
property referring to the pseudo-IRI $self
. For example:
PREFIX luc: <http://www.ontotext.com/connectors/lucene#>
PREFIX luc-index: <http://www.ontotext.com/connectors/lucene/instance#>
INSERT DATA {
luc-index:my_index luc:createConnector '''
{
"types": [
"http://www.ontotext.com/example/wine#Wine"
],
"fields": [
{
"fieldName": "entityId",
"propertyChain": [
"$self"
]
},
{
"fieldName": "grape",
"propertyChain": [
"http://www.ontotext.com/example/wine#madeFromGrape",
"http://www.w3.org/2000/01/rdf-schema#label"
]
}
]
}
''' .
}
The above connector will index the IRI of each wine into the field entityId.
Datatype mapping¶
The Lucene GraphDB Connector maps different types of RDF values to different types of Lucene values according to the basic type of the RDF value (IRI or literal) and the datatype of literals. IRIs and literals whose datatype has no special handling are autodetected and indexed as non-analyzed string values and analyzed text values, respectively, while literals with numeric datatypes such as xsd:long, xsd:double, and xsd:float are indexed as the corresponding Lucene numeric values, and date/time literals such as xsd:dateTime and xsd:date are indexed as date/time values (see Date and time conversion below).
The datatype mapping can be affected by the synchronization options too, e.g., a non-analyzed field that has xsd:long values is indexed as a non-tokenized Field.
Note
For any given field, the automatic mapping uses the first value it sees. This works fine for clean datasets but might lead to problems if your dataset has non-normalized data, e.g., the first value has no datatype but other values have. It is therefore recommended to set datatype to a fixed value, e.g., xsd:date.
Please note that the commonly used xsd:integer and xsd:decimal datatypes are not indexed as numbers because they represent infinite precision numbers. You can override that by using the datatype option to cast to xsd:long, xsd:double, or xsd:float as appropriate.
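For example, a sketch of a field definition that forces the sample data's year values (xsd:integer literals) to be indexed as numeric xsd:long values:

...
{
    "fieldName": "year",
    "propertyChain": [
        "http://www.ontotext.com/example/wine#hasYear"
    ],
    "datatype": "xsd:long"
}
...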
Date and time conversion¶
RDF and Lucene use different models to represent dates and times. Lucene stores values as offsets in seconds for sorting, or as padded ISO strings for range search, e.g., "2020-03-23T12:34:56"^^xsd:dateTime will be stored as the string 20200323123456.
Years in RDF values use the XSD format and are era years, where positive values denote the common era and negative values denote years before the common era. There is no year zero.
Years in padded string date and time Lucene values use the ISO format and are proleptic years, i.e., positive values denote years from the common era with any previous eras just going down by one mathematically so there is year zero.
In short:
year 2020 CE = year 2020 in XSD = year 2020 in ISO.
…
year 1 CE = year 1 in XSD = year 1 in ISO.
year 1 BCE = year -1 in XSD = year 0 in ISO.
year 2 BCE = year -2 in XSD = year -1 in ISO.
…
All years coming from RDF literals will be converted to ISO before indexing in Lucene.
Note
Range search will not work as expected with negative years. This is a limitation of storing the date and time as strings.
XSD date and time values support timezones. In order to have a unified view over values with different timezones, all xsd:dateTime
values will be normalized to the UTC time zone before indexing.
In addition to that, XSD defines the lack of a timezone as undetermined. Since we do not want to have any undetermined state in the indexing system, we define the undetermined time zone as UTC, i.e., "2020-02-14T12:00:00"^^xsd:dateTime
is equivalent to "2020-02-14T12:00:00Z"^^xsd:dateTime
(Z is the UTC time zone, also known as +00:00).
Also note that XSD dates may have a timezone, which leads to additional complications. E.g., "2020-01-01+02:00"^^xsd:date
(the date 1 January 2020 in the +02:00 timezone) will be normalized to 2019-12-31T22:00:00Z
(a different day!) if strict timezone adherence is followed. We have chosen to ignore the timezone on any values that do not have an associated time value, e.g.:
"2020-02-15+02:00"^^xsd:date
"2020-05-08-05:00"^^xsd:date
All of the above will be treated as if they specified UTC as their timezone.
Entity filtering¶
The Lucene connector supports three kinds of entity filters used to fine-tune the set of entities and/or individual values for the configured fields, based on the field value. Entities and field values are synchronized to Lucene if, and only if, they pass the filter. The filters are similar to a FILTER() inside a SPARQL query but not exactly the same. In them, each configured field can be referred to by prefixing it with a ?, much like referring to a variable in SPARQL.
Types of filters¶
- Top-level value filter
The top-level value filter is specified via valueFilter. It is evaluated prior to anything else, when only the document ID is known, and it may not refer to any field names but only to the special field $this that contains the current document ID. Failing to pass this filter removes the entire document early in the indexing process, and it can be used to introduce more restrictions similar to the built-in filtering by type via the types property.
- Top-level document filter
The top-level document filter is specified via documentFilter. This filter is evaluated last, when all of the document has been collected, and it decides whether to include the document in the index. It can be used to enforce global document restrictions, e.g., certain fields are required or a document needs to be indexed only if a certain field value meets specific conditions.
- Per-field value filter
The per-field value filter is specified via valueFilter inside the definition of the field whose values are to be filtered. The filter is evaluated while collecting the data for the field, when each field value becomes available.
The variable that contains the field value is $this. Other field names can be used to filter the current field’s value based on the value of another field, e.g., $this > ?age will compare the current field value to the value of the field age (see also Two-variable filtering). Failing to pass the filter will remove the current field value.
See also Migrating from GraphDB 9.x.
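To recap where each filter is declared, here is a schematic sketch of a connector configuration (the ellipses stand for actual filter expressions and property chains):

{
    "types": [...],
    "valueFilter": "...",
    "documentFilter": "...",
    "fields": [
        {
            "fieldName": "city",
            "propertyChain": [...],
            "valueFilter": "..."
        }
    ]
}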
Filter operators¶
The filter operators are used to test if the value of a given field satisfies a certain condition.
Field comparisons are done on original RDF values before they are converted to Lucene values using datatype mapping.
?var in (value1, value2, ...)
Tests if the field var's value is one of the specified values. Values that do not match are treated as if they were not present in the repository.
Example: ?status in ("active", "new")

?var not in (value1, value2, ...)
The negated version of the in-operator.
Example: ?status not in ("archived")

bound(?var)
Tests if the field var has a valid value.
Example: bound(?name)

isExplicit(?var)
Tests if the field var's value came from an explicit statement in the repository.
Example: isExplicit(?name)

?var = value (equal to), ?var != value (not equal to), ?var > value (greater than), ?var >= value (greater than or equal to), ?var < value (less than), ?var <= value (less than or equal to)
RDF value comparison operators that compare RDF values similarly to the equivalent SPARQL operators. The field var's value will be compared to the specified RDF value. When comparing RDF values that are literals, their datatypes must be compatible, e.g., xsd:integer and xsd:long but not xsd:string and xsd:date. Values that do not match are treated as if they were not present in the repository.
Examples: Given that height's value is "150"^^xsd:int and dateOfBirth's value is "1989-12-31"^^xsd:date, then:
?height = "150"^^xsd:int is true
?height = "150"^^xsd:long is true
?height = "150" is false
?height != "151"^^xsd:int is true
?height != "150" is true
?height > "150"^^xsd:int is false
?height >= "150"^^xsd:int is true
?dateOfBirth < "1990-01-01"^^xsd:date is true

regex(?var, "pattern") or regex(?var, "pattern", "i")
Tests if the field var's value matches the given regular expression pattern. If the "i" flag option is present, this indicates that the match operates in case-insensitive mode. Values that do not match are treated as if they were not present in the repository.
Example: regex(?name, "^mrs?", "i")

expr1 || expr2 or expr1 or expr2
Logical disjunction of expressions expr1 and expr2.
Examples: bound(?name) || bound(?company), bound(?name) or bound(?company)

expr1 && expr2 or expr1 and expr2
Logical conjunction of expressions expr1 and expr2.
Examples: bound(?status) && ?status in ("active", "new"), bound(?status) and ?status in ("active", "new")

!expr
Logical negation of expression expr.
Example: !bound(?company)

(expr)
Grouping of expressions.
Example: (bound(?name) or bound(?company)) && bound(?address)
Filter modifiers¶
In addition to the operators, there are some constructions that can be used to write filters based not on the values of a field but on values related to them:
- Accessing the previous element in the chain
The construction parent(?var) is used for going to a previous level in a property chain. It can be applied recursively as many times as needed, e.g., parent(parent(parent(?var))) goes back in the chain three times. The effective value of parent(?var) can be used with the in or not in operator like this: parent(?company) in (<urn:a>, <urn:b>), or with the bound operator like this: bound(parent(?var)).
- Accessing an element beyond the chain
The construction ?var -> uri (alternatively, ?var o uri or just ?var uri) is used to access additional values that are accessible through the property uri. In essence, this construction corresponds to the triple pattern ?value uri ?effectiveValue, where ?value is a value bound by the field var. The effective value of ?var -> uri can be used with the in or not in operator like this: ?company -> rdf:type in (<urn:c>, <urn:d>). It can be combined with parent() like this: parent(?company) -> rdf:type in (<urn:c>, <urn:d>). The same construction can be applied to the bound operator like this: bound(?company -> <urn:hasBranch>), or even combined with parent() like this: bound(parent(?company) -> <urn:hasGroup>).
The IRI parameter can be a full IRI within < > or the special string rdf:type (alternatively, just type), which will be expanded to http://www.w3.org/1999/02/22-rdf-syntax-ns#type.
- Filtering by RDF graph
The construction graph(?var) is used for accessing the RDF graph of a field’s value. A typical use case is to sync only explicit values: graph(?a) not in (<http://www.ontotext.com/implicit>), but using isExplicit(?a) is the recommended way.
The construction can be combined with parent() like this: graph(parent(?a)) in (<urn:a>).
- Filtering by language tags
The construction lang(?var) is used for accessing the language tag of a field’s value (only RDF literals can have a language tag). The typical use case is to sync only values written in a given language: lang(?a) in ("de", "it", "no"). The construction can be combined with parent() and an element beyond the chain like this: lang(parent(?a) -> <http://www.w3.org/2000/01/rdf-schema#label>) in ("en", "bg"). Literal values without language tags can be filtered by using an empty tag: "".
- Current context variable $this
The special field variable $this (and not ?this, ?$this, $?this) is used to refer to the current context. In the top-level value filter and the top-level document filter, it refers to the document. In the per-field value filter, it refers to the currently filtered field value. In the nested document filter, it refers to the nested document.
- ALL() quantifier
In the context of document-level filtering, a match is true if at least one of potentially many field values matches, e.g., ?location = <urn:Europe> would return true if the document contains { "location": ["<urn:Asia>", "<urn:Europe>"] }.
In addition to this, you can also use the ALL() quantifier when you need all values to match, e.g., ALL(?location) = <urn:Europe> would not match with the above document because <urn:Asia> does not match.
- Entity filters and default values
Entity filters can be combined with default values in order to get more flexible behavior.
If a field has no values in the RDF database, the defaultValue is used. But if a field has some values, defaultValue is NOT used, even if all values are filtered out. See an example in Basic entity filter.
A typical use case for an entity filter is having soft deletes, i.e., instead of deleting an entity, it is marked as deleted by the presence of a specific value for a given property.
Two-variable filtering¶
Besides comparing a field value to one or more constants or running an existential check on the field value, some use cases also require comparing the field value to the value of another field in order to produce the desired result. GraphDB solves this by supporting two-variable filtering in the per-field value filter and the top-level document filter.
Note
This type of filtering is not possible in the top-level value filter because the only variable that is available there is $this.
In the top-level document filter, there are no restrictions as all values are available at the time of evaluation.
In the per-field value filter, two-variable filtering will reorder the defined fields such that values for other fields are already available when the current field's filter is evaluated. For example, let's say we defined a filter $this > ?salary for the field price. This will force the connector to process the field salary first, apply its per-field value filter if any, and only then start collecting and filtering the values for the field price.
Cyclic dependencies will be detected and reported as an invalid filter. For example, if in addition to the above we define a per-field value filter ?price > "1000"^^xsd:int for the field salary, a cyclic dependency will be detected, as both price and salary would require the other field to be indexed first.
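A sketch of the field definitions for this example (the property IRIs are hypothetical):

...
"fields": [
    {
        "fieldName": "salary",
        "propertyChain": [
            "http://www.ontotext.com/example#salary"
        ]
    },
    {
        "fieldName": "price",
        "propertyChain": [
            "http://www.ontotext.com/example#price"
        ],
        "valueFilter": "$this > ?salary"
    }
]
...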
Basic entity filter example¶
Given the following RDF data:
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix example: <http://www.ontotext.com/example#> .
# the entity below will be synchronised because it has a matching value for city: ?city in ("London")
example:alpha
rdf:type example:gadget ;
example:name "John Synced" ;
example:city "London" .
# the entity below will not be synchronised because it lacks the property completely: bound(?city)
example:beta
rdf:type example:gadget ;
example:name "Peter Syncfree" .
# the entity below will not be synchronized because it has a different city value:
# ?city in ("London") will remove the value "Liverpool" so bound(?city) will be false
example:gamma
rdf:type example:gadget ;
example:name "Mary Syncless" ;
example:city "Liverpool" .
If you create a connector instance such as:
PREFIX luc: <http://www.ontotext.com/connectors/lucene#>
PREFIX luc-index: <http://www.ontotext.com/connectors/lucene/instance#>
INSERT DATA {
luc-index:my_index luc:createConnector '''
{
"types": ["http://www.ontotext.com/example#gadget"],
"fields": [
{
"fieldName": "name",
"propertyChain": ["http://www.ontotext.com/example#name"]
},
{
"fieldName": "city",
"propertyChain": ["http://www.ontotext.com/example#city"],
"valueFilter": "$this = \\"London\\""
}
],
"documentFilter": "bound(?city)"
}
''' .
}
The entity :beta is not synchronized as it has no value for city.
To handle such cases, you can modify the connector configuration to specify a default value for city:
...
{
"fieldName": "city",
"propertyChain": ["http://www.ontotext.com/example#city"],
"defaultValue": "London"
}
...
}
The default value is used for the entity :beta
as it has no value for city
in the repository. As the value is “London”, the entity is synchronized.
Advanced entity filter example¶
Sometimes, data represented in RDF is not well suited to map directly to
non-RDF. For example, if you have news articles and they can be tagged
with different concepts (locations, persons, events, etc.), one possible
way to model this is a single property :taggedWith
. Consider the
following RDF data:
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix example2: <http://www.ontotext.com/example2#> .
example2:Berlin
rdf:type example2:Location ;
rdfs:label "Berlin" .
example2:Mozart
rdf:type example2:Person ;
rdfs:label "Wolfgang Amadeus Mozart" .
example2:Einstein
rdf:type example2:Person ;
rdfs:label "Albert Einstein" .
example2:Cannes-FF
rdf:type example2:Event ;
rdfs:label "Cannes Film Festival" .
example2:Article1
rdf:type example2:Article ;
rdfs:comment "An article about a film about Einstein's life while he was a professor in Berlin." ;
example2:taggedWith example2:Berlin ;
example2:taggedWith example2:Einstein ;
example2:taggedWith example2:Cannes-FF .
example2:Article2
rdf:type example2:Article ;
rdfs:comment "An article about Berlin." ;
example2:taggedWith example2:Berlin .
example2:Article3
rdf:type example2:Article ;
rdfs:comment "An article about Mozart's life." ;
example2:taggedWith example2:Mozart .
example2:Article4
rdf:type example2:Article ;
rdfs:comment "An article about classical music in Berlin." ;
example2:taggedWith example2:Berlin ;
example2:taggedWith example2:Mozart .
example2:Article5
rdf:type example2:Article ;
rdfs:comment "A boring article that has no tags." .
example2:Article6
rdf:type example2:Article ;
rdfs:comment "An article about the Cannes Film Festival in 2013." ;
example2:taggedWith example2:Cannes-FF .
Assume you want to map this data to Lucene so that the property example2:taggedWith x is mapped to separate fields taggedWithPerson and taggedWithLocation, according to the type of x (whereas we are not interested in Events). You can map taggedWith twice to different fields and then use an entity filter to get the desired values:
PREFIX luc: <http://www.ontotext.com/connectors/lucene#>
PREFIX luc-index: <http://www.ontotext.com/connectors/lucene/instance#>
INSERT DATA {
luc-index:my_index luc:createConnector '''
{
"types": ["http://www.ontotext.com/example2#Article"],
"fields": [
{
"fieldName": "comment",
"propertyChain": ["http://www.w3.org/2000/01/rdf-schema#comment"]
},
{
"fieldName": "taggedWithPerson",
"propertyChain": ["http://www.ontotext.com/example2#taggedWith"],
"valueFilter": "$$this -> type = <http://www.ontotext.com/example2#Person>"
},
{
"fieldName": "taggedWithLocation",
"propertyChain": ["http://www.ontotext.com/example2#taggedWith"],
"valueFilter": "$this -> type = <http://www.ontotext.com/example2#Location>"
}
]
}
''' .
}
Note
type is the short way to write <http://www.w3.org/1999/02/22-rdf-syntax-ns#type>.
The six articles in the RDF data above will be mapped as such:

Article IRI | Value in taggedWithPerson | Value in taggedWithLocation | Explanation
---|---|---|---
example2:Article1 | example2:Einstein | example2:Berlin | taggedWith values: example2:Einstein (a Person), example2:Berlin (a Location), and example2:Cannes-FF (an Event, filtered out)
example2:Article2 | | example2:Berlin | taggedWith value: example2:Berlin (a Location)
example2:Article3 | example2:Mozart | | taggedWith value: example2:Mozart (a Person)
example2:Article4 | example2:Mozart | example2:Berlin | taggedWith values: example2:Mozart (a Person) and example2:Berlin (a Location)
example2:Article5 | | | no taggedWith values
example2:Article6 | | | taggedWith value: example2:Cannes-FF (an Event, filtered out)
This can be checked by issuing a faceted search for taggedWithLocation and taggedWithPerson:
PREFIX luc: <http://www.ontotext.com/connectors/lucene#>
PREFIX luc-index: <http://www.ontotext.com/connectors/lucene/instance#>
SELECT ?facetName ?facetValue ?facetCount {
?search a luc-index:my_index ;
luc:facetFields "taggedWithLocation,taggedWithPerson" ;
luc:facets [
luc:facetName ?facetName ;
luc:facetValue ?facetValue ;
luc:facetCount ?facetCount
]
}
If the filter was applied, you should get only :Berlin for taggedWithLocation and only :Einstein and :Mozart for taggedWithPerson:
facetName | facetValue | facetCount
---|---|---
taggedWithLocation | http://www.ontotext.com/example2#Berlin | 3
taggedWithPerson | http://www.ontotext.com/example2#Mozart | 2
taggedWithPerson | http://www.ontotext.com/example2#Einstein | 1
Overview of connector predicates¶
The following diagram shows a summary of all predicates that can
administrate (create, drop, check status) connector instances or issue
queries and retrieve results. It can be used as a quick reference of
what a particular predicate needs to be attached to. For example, to
retrieve entities, you need to use :entities
on a search instance and to
retrieve snippets, you need to use :snippets
on an entity. Variables
that are bound as a result of a query are shown in green, blank helper
nodes are shown in blue, literals in red, and IRIs in orange. The
predicates are represented by labeled arrows.
Caveats¶
Order of control¶
Even though SPARQL per se is not sensitive to the order of triple patterns, the Lucene GraphDB Connector expects to receive certain predicates before others so that queries can be executed properly. In particular, predicates that specify the query or query options need to come before any predicates that fetch results.
The diagram in Overview of connector predicates provides a quick overview of the predicates.
Upgrading from previous versions¶
Migrating from GraphDB 9.x¶
GraphDB 10.0 introduces major changes to the filtering mechanism of the connectors. Existing connector instances will not be usable and attempting to use them for queries or updates will throw an error.
If your GraphDB 9.x (or older) connector definitions do not include an entity filter, you can simply repair them.
If your GraphDB 9.x (or older) connector definitions do include an entity filter with the entityFilter
option, you need to rewrite the filter with one of the current filter types:
Save your existing connector definition.
Drop the connector instance.
In general, most older connector filters can be easily rewritten using the per-field value filter and top-level document filter. Rewrite the filters as follows:
Rule of thumb:
If you want to remove individual values, i.e., if the operand is not BOUND() -> rewrite with per-field value filter.
If you want to remove entire documents, i.e., if the operand is BOUND() -> rewrite with top-level document filter.
So if we take the example:
?location = <urn:Europe> AND BOUND(?location) AND ?type IN (<urn:Foo>, <urn:Bar>)
It needs to be rewritten like this:
Per-field rule on field location: $this = <urn:Europe>
Per-field rule on field type: $this IN (<urn:Foo>, <urn:Bar>)
Top-level document filter: BOUND(?location)
Recreate the connector instance using the new definition.