Inference

What is inference?

Inference is the derivation of new knowledge from existing knowledge and axioms. In an RDF database, such as GraphDB, inference is used for deducing further knowledge based on existing RDF data and a formal set of inference rules.
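The idea can be sketched in a few lines of Python: a minimal forward-chaining loop that applies a single entailment rule (RDFS subclass transitivity) until no new facts appear. The class names are purely illustrative, and a real inference engine such as GraphDB's is far more general; this only shows how new triples are derived from existing ones.

```python
SUBCLASS = "rdfs:subClassOf"

def infer(triples):
    """Apply subclass transitivity until a fixpoint is reached."""
    facts = set(triples)
    while True:
        # Derive (a subClassOf c) from (a subClassOf b) and (b subClassOf c).
        new = {
            (a, SUBCLASS, c)
            for (a, p1, b1) in facts if p1 == SUBCLASS
            for (b2, p2, c) in facts if p2 == SUBCLASS and b1 == b2
        } - facts
        if not new:
            return facts
        facts |= new

data = {
    (":Dog", SUBCLASS, ":Mammal"),
    (":Mammal", SUBCLASS, ":Animal"),
}
closure = infer(data)
# closure now also contains (":Dog", "rdfs:subClassOf", ":Animal"),
# a fact that was not stated explicitly but follows from the rule.
```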

Inference in GraphDB

GraphDB supports inference out of the box and keeps inferred facts up to date automatically. Because facts change all the time, manually managing updates or rerunning the inference process would be prohibitively expensive; maintaining the inferred facts automatically results in faster queries, better data availability, and more accurate analysis.

Inference uncovers the full power of data modelled with RDF(S) and ontologies. GraphDB will use the data and the rules to infer more facts and thus produce a richer data set than the one you started with.

GraphDB can be configured via “rule-sets” – sets of axiomatic triples and entailment rules – that determine the applied semantics. The implementation relies on a compile stage, during which the rules are compiled into Java source code, which is in turn compiled into Java bytecode and merged with the inference engine.

Standard rule-sets

The GraphDB inference engine provides full standard-compliant reasoning for RDFS, OWL-Horst, OWL2-RL and OWL2-QL.

To apply a rule-set, simply choose one of the options in the drop-down list when configuring your repository in the GraphDB Workbench, as shown below:

_images/Inference1.png
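A rule-set can also be chosen when creating a repository from a configuration file. The fragment below is a hedged sketch of such a Turtle configuration: the `owlim:ruleset` parameter and the prefixes follow the general shape of GraphDB repository configurations, but the exact parameter names and available values should be checked against the documentation for your GraphDB version.

```
@prefix rep:   <http://www.openrdf.org/config/repository#> .
@prefix sr:    <http://www.openrdf.org/config/repository/sail#> .
@prefix sail:  <http://www.openrdf.org/config/sail#> .
@prefix owlim: <http://www.ontotext.com/trree/owlim#> .

# "my-repo" is an illustrative repository ID
[] a rep:Repository ;
   rep:repositoryID "my-repo" ;
   rep:repositoryImpl [
      rep:repositoryType "graphdb:SailRepository" ;
      sr:sailImpl [
         sail:sailType "graphdb:Sail" ;
         # selects the standard rule-set, e.g. "rdfs", "owl-horst-optimized",
         # "owl2-rl" or "owl2-ql"
         owlim:ruleset "owl-horst-optimized"
      ]
   ] .
```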

Custom rule-sets

GraphDB also supports custom rule-sets, which enable custom reasoning through the same performance-optimised inference engine. Custom rule-sets are defined in .pie files.
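As a rough sketch, a .pie file declares namespace prefixes, optional axiomatic triples, and entailment rules with premises above a separator line and conclusions below it. The fragment below illustrates this general shape with a single subclass-transitivity rule; the rule name is illustrative, and the exact syntax should be checked against the GraphDB documentation before use.

```
Prefices
{
    rdf  : http://www.w3.org/1999/02/22-rdf-syntax-ns#
    rdfs : http://www.w3.org/2000/01/rdf-schema#
}

Axioms
{
    <rdf:type> <rdf:type> <rdf:Property>
}

Rules
{
Id: subclass_transitivity
    a <rdfs:subClassOf> b
    b <rdfs:subClassOf> c
    ------------------------------------
    a <rdfs:subClassOf> c
}
```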

To load custom rule-sets, simply point to the location of your .pie file as shown below:

_images/Inference2.png