Reasoning¶
What’s in this document?
Hint
To get the full benefit from this section, you need some basic knowledge of the two principal reasoning strategies for rule-based inference: forward chaining and backward chaining.
GraphDB performs reasoning based on forward chaining of entailment rules defined using RDF triple patterns with variables. GraphDB’s reasoning strategy is one of Total materialization, where the inference rules are applied repeatedly to the asserted (explicit) statements until no further inferred (implicit) statements are produced.
The GraphDB repository uses configured rulesets to compute all inferred statements at load time. To some extent, this process increases the processing cost and time taken to load a repository with a large amount of data. However, it has the desirable advantage that subsequent query evaluation can proceed extremely quickly.
Logical formalism¶
GraphDB uses a notation almost identical to R-entailment as defined by ter Horst. RDFS inference is achieved via a set of axiomatic triples and entailment rules. These rules allow the full set of valid inferences using RDFS semantics to be determined.
Herman ter Horst defines RDFS extensions for more general rule support and a fragment of OWL, which is more expressive than DLP and fully compatible with RDFS. First, he defines R-entailment, which extends RDFS-entailment in the following way:
It can operate on the basis of any set of rules R (i.e., allows for extension or replacement of the standard set, defining the semantics of RDFS);
It operates over so-called generalized RDF graphs, where blank nodes can appear as predicates (a possibility disallowed in RDF);
Rules without premises are used to declare axiomatic statements;
Rules without consequences are used to detect inconsistencies (integrity constraints).
Tip
To learn more, see OWL Compliance.
Rule format and semantics¶
The rule format and the semantics enforced in GraphDB are analogous to R-entailment, with the following differences:
Free variables in the head (without binding in the body) are treated as blank nodes. This feature must be used with extreme caution because custom rulesets can easily be created, which recursively infer an infinite number of statements making the semantics intractable;
Variable inequality constraints can be specified in addition to the triple patterns (they can be placed after any premise or consequence). This leads to less complexity compared to R-entailment;
The cut operator can be associated with rule premises. This is an optimization that tells the rule compiler not to generate a variant of the rule with the identified rule premise as the first triple pattern;
Context can be used for both rule premises and rule consequences, allowing more expressive constructions that utilize ‘intermediate’ statements contained within the given context URI;
Consistency checking rules do not have consequences and will indicate an inconsistency when the premises are satisfied;
Axiomatic triples can be provided as a set of statements, although these are not modeled as rules with empty bodies.
The ruleset file¶
GraphDB can be configured via rulesets - sets of axiomatic triples, consistency checks and entailment rules, which determine the applied semantics.
A ruleset file has three sections named Prefices, Axioms, and Rules. All sections are mandatory and must appear sequentially in this order. Comments are allowed anywhere and follow the Java convention, i.e., "/* ... */" for block comments and "//" for end-of-line comments.
For historic reasons, the way in which terms (variables, URLs and literals) are written differs from Turtle and SPARQL:
URLs in Prefixes are written without angle brackets
variables are written without ? or $ and can include multiple alphanumeric characters
URLs are written in brackets, no matter whether they use a prefix or are spelled in full
datatype URLs are written without brackets, e.g., a <owl:maxQualifiedCardinality> "1"^^xsd:nonNegativeInteger
See the examples below and be careful when writing terms.
Prefixes¶
This section defines the abbreviations for the namespaces used in the rest of the file. The syntax is:
shortname : URI
The following is an example of what a typical prefixes section might look like:
Prefices
{
rdf : http://www.w3.org/1999/02/22-rdf-syntax-ns#
rdfs : http://www.w3.org/2000/01/rdf-schema#
owl : http://www.w3.org/2002/07/owl#
xsd : http://www.w3.org/2001/XMLSchema#
}
Axioms¶
This section asserts axiomatic triples, which usually describe the meta-level primitives used for defining the schema, such as rdf:type, rdfs:Class, etc. It contains a list of the (variable-free) triples, one per line.
For example, the RDF axiomatic triples are defined in the following way:
Axioms
{
// RDF axiomatic triples
<rdf:type> <rdf:type> <rdf:Property>
<rdf:subject> <rdf:type> <rdf:Property>
<rdf:predicate> <rdf:type> <rdf:Property>
<rdf:object> <rdf:type> <rdf:Property>
<rdf:first> <rdf:type> <rdf:Property>
<rdf:rest> <rdf:type> <rdf:Property>
<rdf:value> <rdf:type> <rdf:Property>
<rdf:nil> <rdf:type> <rdf:List>
}
Note
Axiomatic statements are considered to be inferred for the purpose of query answering because they are a result of semantic interpretation defined by the chosen ruleset.
Rules¶
This section is used to define entailment rules and consistency checks, which share a similar format. Each definition consists of premises and corollaries that are RDF statements defined with subject, predicate, object, and optional context components. The subject and object can each be a variable, blank node, literal, full URI, or the short name for a URI. The predicate can be a variable, a full URI, or a short name for a URI. If given, the context must be a full URI or a short name for a URI. Variables are alphanumeric and must begin with a letter.
If the context is provided, the statements produced as rule consequences are not ‘visible’ during normal query answering. Instead, they can only be used as input to this or other rules and only when the rule premise explicitly uses the given context (see the example below).
Furthermore, inequality constraints can be used to state that the values of the variables in a statement must not be equal to a specific full URI (or its short name), a blank node, or to the value of another variable within the same rule. The behavior of an inequality constraint depends on whether it is placed in the body or the head of a rule. If it is placed in the body of a rule, then the whole rule will not ‘fire’ if the constraint fails, i.e., the constraint can be next to any statement pattern in the body of a rule with the same behavior (the constraint does not have to be placed next to the variables it references). If the constraint is in the head, then its location is significant because a constraint that does not hold will prevent only the statement it is adjacent to from being inferred.
Entailment rules¶
The syntax of a rule definition is as follows:
Id: <rule_name>
<premises> <optional_constraints>
-------------------------------
<consequences> <optional_constraints>
where each premise and consequence is on a separate line.
The following example helps to illustrate the possibilities:
Rules
{
Id: rdf1_rdfs4a_4b
x a y
-------------------------------
x <rdf:type> <rdfs:Resource>
a <rdf:type> <rdfs:Resource>
y <rdf:type> <rdfs:Resource>
Id: rdfs2
x a y [Constraint a != <rdf:type>]
a <rdfs:domain> z [Constraint z != <rdfs:Resource>]
-------------------------------
x <rdf:type> z
Id: owl_FunctProp
p <rdf:type> <owl:FunctionalProperty>
x p y [Constraint y != z, p != <rdf:type>]
x p z [Constraint z != y] [Cut]
-------------------------------
y <owl:sameAs> z
}
The symbols p, x, y, z, and a are variables. The second rule contains two constraints that reduce the number of bindings for each premise, i.e., they ‘filter out’ those statements where the constraint does not hold.
In a forward chaining inference step, a rule is interpreted as meaning that for all possible ways of satisfying the premises, the bindings for the variables are used to populate the consequences of the rule. This generates new statements that will manifest themselves in the repository, e.g., by being returned as query results.
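To make the fixpoint loop concrete, here is a minimal Python sketch of total materialization with a single rdfs2-style rule. The triple encoding, the example resource names, and the omission of the rule's inequality constraints are assumptions made for illustration; this is not GraphDB's implementation.

```python
# Minimal forward-chaining sketch: apply rules to the triple set
# until no new statements are produced (total materialization).

def rdfs2(triples):
    """Rule rdfs2 (simplified): x a y, a rdfs:domain z  =>  x rdf:type z."""
    inferred = set()
    for (x, a, y) in triples:
        for (a2, p, z) in triples:
            if a2 == a and p == "rdfs:domain":
                inferred.add((x, "rdf:type", z))
    return inferred

def materialize(triples, rules):
    closure = set(triples)
    while True:
        new = set()
        for rule in rules:
            new |= rule(closure)
        new -= closure            # keep only genuinely new statements
        if not new:
            return closure        # fixpoint reached
        closure |= new

explicit = {
    ("ex:alice", "ex:worksFor", "ex:acme"),
    ("ex:worksFor", "rdfs:domain", "ex:Person"),
}
closure = materialize(explicit, [rdfs2])
# the closure now also contains ("ex:alice", "rdf:type", "ex:Person")
```

Because the closure is computed up front, a later query for all instances of ex:Person is a simple lookup, which is the query-speed advantage described above.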
The last rule contains an example of using the Cut operator, which is an optimization hint for the rule compiler. When rules are compiled, a different variant of the rule is created for each premise, so that each premise occurs as the first triple pattern in one of the variants. This is done so that incoming statements can be efficiently matched to the appropriate inference rules. However, when a rule contains two or more premises that match identical triple patterns, but using different variable names, the extra variant(s) are redundant, and better efficiency can be achieved by simply not creating them.
In the above example, the rule owl_FunctProp would by default be compiled in three variants:
p <rdf:type> <owl:FunctionalProperty>
x p y
x p z
-------------------------------
y <owl:sameAs> z
x p y
p <rdf:type> <owl:FunctionalProperty>
x p z
-------------------------------
y <owl:sameAs> z
x p z
p <rdf:type> <owl:FunctionalProperty>
x p y
-------------------------------
y <owl:sameAs> z
Here, the last two variants are identical apart from the rotation of variables y and z, so one of them is not needed. The use of the Cut operator above tells the rule compiler to eliminate this last variant, i.e., the one beginning with the premise x p z.
The use of context in rule bodies and rule heads is also best explained by an example. The following three rules implement the OWL2-RL property chain rule prp-spo2, and are inspired by the Rule Interchange Format (RIF) implementation:
Id: prp-spo2_1
p <owl:propertyChainAxiom> pc
start pc last [Context <onto:_checkChain>]
----------------------------
start p last
Id: prp-spo2_2
pc <rdf:first> p
pc <rdf:rest> t [Constraint t != <rdf:nil>]
start p next
next t last [Context <onto:_checkChain>]
----------------------------
start pc last [Context <onto:_checkChain>]
Id: prp-spo2_3
pc <rdf:first> p
pc <rdf:rest> <rdf:nil>
start p last
----------------------------
start pc last [Context <onto:_checkChain>]
The RIF rules that implement prp-spo2 use a relation (unrelated to the input or generated triples) called _checkChain. The GraphDB implementation maps this relation to the ‘invisible’ context of the same name by adding [Context <onto:_checkChain>] to certain statement patterns. Generated statements with this context can only be used for bindings to rule premises when the exact same context is specified in the rule premise. The generated statements with this context will not be used for any other rules.
An inequality constraint can also check whether a variable is bound to a blank node. The inference rule fires only when the constraint holds, i.e., when the variable is not bound to a blank node:
Id: prp_dom
a <rdfs:domain> b
c a d
------------------------------------
c <rdf:type> b [Constraint b != blank_node]
owl:sameAs optimization¶
The built-in OWL property owl:sameAs indicates that two URI references actually refer to the same thing. The following rules express the transitive and symmetric semantics of this property:
/**
Id: owl_sameAsCopySubj
// Copy of statement over owl:sameAs on the subject. The support for owl:sameAs
// is implemented through replication of the statements where the equivalent
// resources appear as subject, predicate, or object. See also the couple of
// rules below
//
x <owl:sameAs> y [Constraint x != y]
x p z // [Constraint p != <owl:sameAs>]
-------------------------------
y p z
Id: owl_sameAsCopyPred
// Copy of statement over owl:sameAs on the predicate
//
p <owl:sameAs> q [Constraint p != q]
x p y
-------------------------------
x q y
Id: owl_sameAsCopyObj
// Copy of statement over owl:sameAs on the object
//
x <owl:sameAs> y [Constraint x != y]
z p x // [Constraint p != <owl:sameAs>]
-------------------------------
z p y
**/
So, all nodes in the transitive and symmetric chain make relations to all other nodes, i.e., the relation coincides with the Cartesian product \(N \times N\), hence the full closure contains \(N^2\) statements. GraphDB avoids the generation of these excessive links by nominating an equivalence class representative that stands for all resources in the symmetric and transitive chain. By default, the owl:sameAs optimization is enabled in all rulesets except empty, rdfs, and rdfsplus. For additional information, check Optimization of owl:sameAs.
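The equivalence-class-representative idea can be sketched with a small union-find structure in Python. This is purely illustrative; the resource names are invented and GraphDB's internal data structures are not described here.

```python
# Union-find sketch of the owl:sameAs optimization: every resource in a
# sameAs chain maps to one class representative, so the N^2 pairwise
# owl:sameAs statements never need to be materialized.

parent = {}

def find(x):
    """Return the equivalence class representative of x."""
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]   # path halving keeps chains short
        x = parent[x]
    return x

def same_as(x, y):
    """Assert x owl:sameAs y by merging their equivalence classes."""
    rx, ry = find(x), find(y)
    if rx != ry:
        parent[ry] = rx                 # rx becomes the representative

same_as("ex:a", "ex:b")
same_as("ex:b", "ex:c")
same_as("ex:c", "ex:d")
# All four resources now share one representative, instead of
# materializing 4 * 4 = 16 pairwise owl:sameAs statements.
```

Storing one representative per resource keeps the closure linear in the number of equivalent resources rather than quadratic.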
Consistency checks¶
Consistency checks are used to ensure that the data model is in a consistent state, and are applied whenever an update transaction is committed. GraphDB supports consistency violation checks using standard OWL2-RL semantics. You can define rulesets that contain consistency rules. When creating a new repository, set the check-for-inconsistencies configuration parameter to true. It is false by default.
The syntax is similar to that of rules, except that Consistency replaces the Id tag that introduces normal rules. Also, consistency checks do not have any consequences and indicate an inconsistency whenever their premises can be satisfied, e.g.:
Consistency: something_can_not_be_nothing
x rdf:type owl:Nothing
-------------------------------
Consistency: both_sameAs_and_differentFrom_is_forbidden
x owl:sameAs y
x owl:differentFrom y
-------------------------------
Consistency checks features
Materialization and consistency mix: rulesets support the definition of a mixture of materialization and consistency rules, following the existing naming syntax Id: and Consistency:.
Multiple named rulesets: GraphDB supports multiple named rulesets.
No downtime deployment: The deployment of new/updated rulesets can be done to a running instance.
Update transaction ruleset: Each update transaction can specify which named ruleset to apply. This is done by using ‘special’ RDF statements within the update transaction.
Consistency violation exceptions: if a consistency rule is violated, GraphDB throws an exception. The exception includes details such as which rule has been violated and by which RDF statements.
Consistency rollback: if a consistency rule is violated within an update transaction, the transaction will be rolled back and no statements will be committed.
If any consistency check fails when a transaction is committed and consistency checking is switched on (it is off by default), then:
A message is logged with details of what consistency checks failed;
An exception is thrown with the same details;
The whole transaction is rolled back.
Rulesets¶
GraphDB offers several predefined semantics by way of standard rulesets (files), but can also be configured to use custom rulesets with semantics better tuned to the particular domain. The required semantics can be specified through the ruleset for each specific repository instance. Applications that do not need the complexity of the most expressive supported semantics can choose one of the less complex, which will result in faster inference.
Note
Each ruleset defines both rules and some schema statements, otherwise known as axiomatic triples. These (read-only) triples are inserted into the repository at initialization time and count towards the total number of reported ‘explicit’ triples. The variation may be up to the order of hundreds depending upon the ruleset.
Predefined rulesets¶
The pre-defined rulesets provided with GraphDB cover various well-known knowledge representation formalisms, and are layered in such a way that each extends the preceding one.
| Ruleset | Description |
|---|---|
| empty | No reasoning, i.e., GraphDB operates as a plain RDF store. |
| rdfs | Supports the standard model-theoretic RDFS semantics. |
| rdfsplus | Extended version of RDFS that also supports symmetric, inverse, and transitive properties via the OWL vocabulary. |
| owl-horst | OWL dialect close to OWL-Horst, essentially pD*. |
| owl-max | RDFS and that part of OWL Lite that can be captured in rules (deriving functional and inverse functional properties, all-different, subclass by union/enumeration, etc.). |
| owl2-ql | The OWL2-QL profile, a fragment of OWL2 Full designed so that sound and complete query answering is LOGSPACE with respect to the size of the data. This OWL2 profile is based on DL-LiteR, a variant of DL-Lite that does not require the unique name assumption. |
| owl2-rl | The OWL2-RL profile, an expressive fragment of OWL2 Full that is amenable for implementation on rule engines. |
Note
Not all rulesets support data-type reasoning, which is the main reason why OWL-Horst is not the same as pD*. The ruleset you need to use for a specific repository is defined through the ruleset parameter. There are optimized versions of all rulesets that avoid some little-used inferences.
Note
The default ruleset is RDFS-Plus (optimized).
OWL2-QL non-conformance¶
The implementation of OWL2-QL is non-conformant with the W3C OWL2 profiles recommendation, as shown in the following table:

| Conformant behavior | Implemented behavior |
|---|---|
| Given a list of disjoint (data or object) properties and an entity that is related with these properties to objects | For each pair |
| For each class C in the knowledge base, infer the existence of an anonymous class that is the union of a list of classes containing only C. | Not supported. Even if this infinite expansion were possible in a forward chaining rule-based implementation, the resulting statements would be of no use during query evaluation. |
| If a is an instance of C1, b is an instance of C2, and C1 and C2 are disjoint, infer: | Impractical for knowledge bases with many members of pairs of disjoint classes, e.g., Wordnet. Instead, this is implemented as a consistency check: if x is an instance of C1 and C2, and C1 and C2 are disjoint, then inconsistent. |
Custom rulesets¶
GraphDB has an internal rule compiler that can be configured with a custom set of inference rules and axioms. You may define a custom ruleset in a .pie file (e.g., MySemantics.pie). The easiest way to create a custom ruleset is to start by modifying one of the .pie files that were used to build the precompiled rulesets.
Note
All pre-defined .pie files are included in the configs/rules folder of the GraphDB distribution.
If the code generation or compilation cannot be completed successfully, a Java exception is thrown indicating the problem. It will state either the Id of the rule or the complete line from the source file where the problem is located. Line information is not preserved during the parsing of the rule file.
You must specify the custom ruleset via the ruleset configuration parameter. The value of the ruleset parameter is interpreted as a filename, and .pie is appended when not present. This file is processed to create Java source code that is compiled using the compiler from the Java Development Kit (JDK). The compiler is invoked using the mechanism provided by JDK version 1.6 (or later). Therefore, a prerequisite for using custom rulesets is that you run the application with the Java Virtual Machine (JVM) from a JDK version 1.6 (or later). If all goes well, the class is loaded dynamically and instantiated for further use by GraphDB during inference. The intermediate files are created in the folder pointed to by the java.io.tmpdir system property. The JVM must have sufficient rights to read and write to this directory.
Note
Changing the ruleset of an existing GraphDB repository is more difficult. It is necessary to export/backup all explicit statements and recreate a new repository with the required ruleset. Once created, the explicit statements exported from the old repository can be imported into the new one.
Inference¶
Reasoner¶
The GraphDB reasoner requires the .pie file of each ruleset to be compiled in order to instantiate it. The process includes several steps:
Generate Java code out of the .pie file contents using the built-in GraphDB rule compiler.
Compile the Java code (this requires a JDK instead of a JRE, so that the Java compiler is available through the standard Java instrumentation infrastructure).
Instantiate the compiled code using a custom byte-code class loader.
Note
GraphDB supports dynamic extension of the reasoner with new rulesets.
Rulesets execution¶
For each rule and each of its premises, a rule variant is generated. We call this premise the ‘leading premise’ of the variant. If a premise has the Cut annotation, no variant is generated for it.
Every incoming triple (inserted or inferred) is checked against the leading premise of every rule variant. Since rules are compiled to Java bytecode on startup, this checking is very fast.
If the leading premise matches, the rest of the premises are checked. This checking needs to access the repository, so it can be much slower.
GraphDB first checks premises with the least number of unbound variables.
For premises that have the same number of unbound variables, GraphDB follows the textual order in the rule.
If all premises match, the conclusions of the rule are inferred.
For each inferred statement:
If it does not exist in the default graph, it is stored in the repository and is queued for inference.
If it exists in the default graph, no duplicate statement is recorded. However, its ‘inferred’ flag is still set (see How to manage explicit and implicit statements).
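The premise-ordering heuristic can be sketched as follows, representing premises as tuples in the document's own notation (bare names for variables, angle brackets for URLs). The representation and helper names are assumptions for illustration, not GraphDB code.

```python
# Sketch of premise ordering during rule evaluation: after the leading
# premise binds some variables, remaining premises are tried with the
# fewest unbound variables first; ties keep the textual order.

def is_var(term):
    # document convention: URLs are written in angle brackets
    return not term.startswith("<")

def order_premises(premises, bound):
    # sorted() is stable, so equal counts preserve textual order
    return sorted(premises,
                  key=lambda pr: sum(1 for t in pr
                                     if is_var(t) and t not in bound))

# body of the owl_FunctProp rule from the examples above
rule_body = [("x", "p", "y"),
             ("p", "<rdf:type>", "<owl:FunctionalProperty>"),
             ("x", "p", "z")]

# after matching the leading premise ("x", "p", "y"), x, p, y are bound
ordered = order_premises(rule_body[1:], bound={"x", "p", "y"})
# the fully bound premise is checked first, then ("x", "p", "z")
```

Checking fully bound premises first fails fast: a single repository lookup can reject the rule variant before any join over unbound variables is attempted.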
Retraction of assertions¶
GraphDB stores both explicit and implicit statements, i.e., the statements inferred (materialized) from the explicit statements. So, when an explicit statement is removed from the repository, any implicit statements that rely on it must also be removed.
In the previous versions of GraphDB, this was achieved with a re-computation of the full closure (minimal model), i.e., applying the entailment rules to all explicit statements and computing the inferences. This approach guarantees correctness, but does not scale - the computation is increasingly slow and computationally expensive in proportion to the number of explicit statements and the complexity of the entailment ruleset.
Removal of explicit statements is now achieved in a more efficient manner, by invalidating only the inferred statements that can no longer be derived in any way.
One approach is to maintain track information for every statement - typically the list of statements that can be inferred from this statement. The list is built up during inference as the rules are applied and the statements inferred by the rules are added to the lists of all statements that triggered the inferences. The drawback of this technique is that track information inflates more rapidly than the inferred closure - in the case of large datasets up to 90% of the storage is required just to store the track information.
Another approach is to perform backward chaining. Backward chaining does not require track information, since it essentially re-computes the tracks as required. Instead, a flag for each statement is used so that the algorithm can detect when a statement has been previously visited and thus avoid an infinite recursion.
The algorithm used in GraphDB works as follows:
1. Apply a ‘visited’ flag to all statements (false by default).
2. Store the statements to be deleted in the list L.
3. For each statement in L that is not visited yet, mark it as visited and apply the forward chaining rules. Statements marked as visited become invisible, which is why each statement must first be marked and then used for forward chaining.
4. If there are no more unvisited statements in L, then END.
5. Store all inferred statements in the list L1.
6. For each element in L1, check the following:
- If the statement is purely implicit (a statement can be both explicit and implicit; if so, it is not considered purely implicit), mark it as deleted (preventing it from being returned by the iterators) and check whether it is supported by other statements. The isSupported() method uses queries that contain the premises of the rules, with the rule variables preliminarily bound using the statement in question. That is, isSupported() starts from the projection of the query and checks whether the query returns at least one result, i.e., it performs backward chaining.
- If a result is returned by any query (every rule is represented by a query) in isSupported(), then the statement can still be derived from other statements in the repository, so it must not be deleted (its status is returned to ‘inferred’).
- If all queries return no results, the statement can no longer be derived from any other statements, so its status remains ‘deleted’ and the statement counter is updated.
7. L := L1 and GOTO 3.
Special care is taken when retracting owl:sameAs statements, so that the algorithm still works correctly when modifying equivalence classes.
Note
One consequence of this algorithm is that deletion can still have poor performance when deleting schema statements, due to the (probably) large number of implicit statements inferred from them.
Note
The forward chaining part of the algorithm terminates as soon as it detects that a statement is read-only, because if it cannot be deleted, there is no need to look for statements derived from it. For this reason, performance can be greatly improved when all schema statements are made read-only by importing ontologies (and OWL/RDFS vocabularies) using the imports repository parameter.
Schema update transactions¶
When fast statement retraction is required but it is also necessary to update schemas, you can use a special statement pattern: include in the update an insert of a statement with the following form:
[] <http://www.ontotext.com/owlim/system#schemaTransaction> []
GraphDB will use the smooth-delete algorithm, but will also traverse read-only statements and allow them to be deleted/inserted. Such transactions are likely to be much more computationally expensive to achieve, but are intended for the occasional, offline update to otherwise read-only schemas. The advantage is that fast-delete can still be used, but no repository export and import is required when making a modification to a schema.
For any transaction that includes an insert of the above special predicate/statement:
Read-only (explicit or inferred) statements can be deleted;
New explicit statements are marked as read-only;
New inferred statements are marked:
Read-only if all the premises that fired the rule are read-only;
Normal otherwise.
Schema statements can be inserted or deleted using SPARQL UPDATE as follows:
DELETE {
# [[schema statements to delete]]
}
INSERT {
[] <http://www.ontotext.com/owlim/system#schemaTransaction> [] .
# [[schema statements to insert]]
}
WHERE { }
How To’s¶
Operations on rulesets¶
All examples below use the sys: namespace, defined as:
prefix sys: <http://www.ontotext.com/owlim/system#>
Add a custom ruleset from .pie file¶
The predicate sys:addRuleset adds a custom ruleset from the specified .pie file. The ruleset is named after the filename, without the .pie extension.
- Example 1
This creates a new ruleset ‘test’. If the file resides at, for example, /opt/rules/test.pie, the absolute path can be specified as <file:/opt/rules/test.pie>, <file://opt/rules/test.pie>, or <file:///opt/rules/test.pie>, i.e., with 1, 2, or 3 slashes. Relative paths are specified without the slashes or with a dot between the slashes: <file:opt/rules/test.pie>, <file:/./opt/rules/test.pie>, <file://./opt/rules/test.pie>, or even <file:./opt/rules/test.pie> (with a dot in front of the path). Relative paths can be used if you know the working directory of the Java process in which GraphDB runs.
INSERT DATA { _:b sys:addRuleset <file:c:/graphdb/test-data/test.pie> }
- Example 2
Same as above, but creates a ruleset called ‘custom’ out of the test.pie file found at the given absolute path.
INSERT DATA { <_:custom> sys:addRuleset <file:c:/graphdb/test-data/test.pie> }
- Example 3
Retrieves the .pie file from the given URL. Again, you can use <_:custom> to change the name of the ruleset to “custom” or as necessary.
INSERT DATA { _:b sys:addRuleset <http://example.com/test-data/test.pie> }
Add a built-in ruleset¶
The predicate sys:addRuleset can also add a built-in ruleset (one of the rulesets that GraphDB supports natively).
- Example
This adds the "owl-max" ruleset to the list of rulesets in the repository.
INSERT DATA { _:b sys:addRuleset "owl-max" }
Add a custom ruleset with SPARQL INSERT¶
The predicate sys:addRuleset can also add a custom ruleset whose contents are provided inline as a string literal. As before, the subject URI determines the name of the ruleset.
- Example
This creates a new ruleset "custom".
INSERT DATA { <_:custom> sys:addRuleset '''
  Prefices { a : http://a/ }
  Axioms {}
  Rules {
    Id: custom
      a b c
      a <a:custom1> c
      -----------------------
      b <a:custom1> a
  }
''' }
Note
Effects on the axiom set
When dealing with more than one ruleset, the resulting set of axioms is the UNION of the axioms of all rulesets added so far. There is a special kind of statement that behaves much like an axiom in that it can never be removed: <P rdf:type rdf:Property>, <P rdfs:subPropertyOf P>, <X rdf:type rdfs:Resource>.
Such statements enter the repository just once, at the moment the property or resource is encountered for the first time, and remain in the repository forever, even if there are no more nodes related to that particular property or resource. (See Rules optimizations.)
List all rulesets¶
The predicate sys:listRulesets
lists all rulesets available in the repository.
- Example
SELECT ?state ?ruleset { ?state sys:listRulesets ?ruleset }
Explore a ruleset¶
The predicate sys:exploreRuleset explores the contents of a ruleset.
- Example
SELECT * { ?content sys:exploreRuleset "test" }
Set a default ruleset¶
The predicate sys:defaultRuleset switches the default ruleset to the one specified in the object literal.
- Example
This sets the default ruleset to “test”. All transactions use this ruleset, unless they specify another ruleset as a first operation in the transaction.
INSERT DATA { _:b sys:defaultRuleset "test" }
Rename a ruleset¶
The predicate sys:renameRuleset renames a ruleset. The old name (“custom” in the example below) is specified as the subject URI in the default namespace, and the new name is given in the object literal.
- Example
This renames the ruleset “custom” to “test”.
INSERT DATA { <_:custom> sys:renameRuleset "test" }
Delete a ruleset¶
The predicate sys:removeRuleset deletes the ruleset "test" specified in the object literal.
- Example
INSERT DATA { _:b sys:removeRuleset "test" }
Note
Effects on the axiom set when removing a ruleset
When removing a ruleset, we just remove the mapping from the ruleset name to the corresponding inferencer. The axioms stay untouched.
Consistency check¶
The predicate sys:consistencyCheckAgainstRuleset checks whether the repository is consistent with the specified ruleset.
- Example
INSERT DATA { _:b sys:consistencyCheckAgainstRuleset "test" }
Reinferring¶
Statements are inferred only when new statements are inserted. So, if you reconnect to a repository with a different ruleset, the change does not take effect immediately. However, you can force reinference with an update statement such as:
INSERT DATA { [] <http://www.ontotext.com/owlim/system#reinfer> [] }
This removes all inferred statements and reinfers from scratch using the current ruleset. If a statement is both explicitly inserted and inferred, it is not removed.
Statements of the type <P rdf:type rdf:Property>, <P rdfs:subPropertyOf P>, <X rdf:type rdfs:Resource>, as well as the axioms from all rulesets, stay untouched.
Tip
To learn more, see How to manage explicit and implicit statements.
Provenance¶
GraphDB’s Provenance plugin enables the generation of the inference closure from a specific named graph at query time. This is useful when you want to trace which implicit statements are generated from a specific graph, together with the axiomatic triples that are part of the configured ruleset, i.e., the ones inserted with the special predicate sys:schemaTransaction. Find more about it in the plugin’s documentation.