One of the main features of OWL is its sound logical basis: the design of OWL has been influenced by many decades of research in logic-based knowledge representation, automated reasoning, and non-classical logics. The result is a formalism that comes with (a) a well-understood semantics and (b) reasoning problems whose computational complexity is well understood and for which various reasoning algorithms have been developed, investigated, implemented, and put to use. But it is also a formalism that, from a logics perspective, is quite unusual in various respects. This post provides an overview — thus incomplete by definition — of OWL for readers familiar with logic, and it concentrates on OWL DL and the direct semantics (thus ignoring OWL Full and the RDF-based semantics).
OWL (both OWL DL and OWL 2 DL) can be seen as a fragment of first-order logic for which reasoning is decidable. In this sense, an ontology can be viewed as a finite set of first-order axioms that conform to certain syntactic restrictions. Here are some observations about this fragment:
- in the Description Logic community, this fragment is known as SHOIN(D) (for OWL DL) or SROIQ(D) (for OWL 2 DL),
- it is closely related to modal logics and the guarded fragment in that it only allows for a guarded form of quantification,
- it is closely related to hybrid logics in that it allows for nominals,
- it only allows for unary and binary predicates, and uses a variable-free syntax,
- it extends propositional logic, i.e., OWL provides full Boolean operators,
- it lacks the finite model property (i.e., we can write down a consistent, finite ontology all of whose models have infinitely many elements; an example is sketched right after this list),
- it has a form of tree model property (i.e., any consistent ontology has a model that is mostly tree-shaped),
- it is a logic whose standard reasoning problems, e.g., satisfiability and entailment, are decidable, yet at least NExpTime-hard (see this paper for further pointers).
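To see how the finite model property can fail, here is a sketch of such an ontology, written in Manchester syntax with made-up names (r, A, B, a):

```
Prefix: : <http://example.org/infinity#>
Ontology: <http://example.org/infinity>

  ObjectProperty: r
    Characteristics: InverseFunctional

  Class: A
    SubClassOf: r some A

  Class: B
    SubClassOf: A,
                not (inverse r some owl:Thing)

  Individual: a
    Types: B
```

This ontology is consistent, but every model contains an infinite r-chain starting at a: a is a B, hence an A, hence has an r-successor in A, which in turn has an r-successor in A, and so on. The chain can never fold back on itself, because the inverse functionality of r would propagate any such identification back down the chain until a itself would need an r-predecessor, which its membership in B forbids.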
From a pure logic perspective, an OWL ontology is of a somewhat unusual form: it is a finite set of axioms that are either universally quantified formulae of the form “forall x.C(x) implies D(x)”, or ground facts of the form “C(a)” or “R(a,b)”, where C and D are complex expressions built using Boolean operators and the above-mentioned guarded quantification, e.g., “E(x) and exists y.(R(x,y) and F(y))” or “E(x) or forall y.(R(x,y) implies G(y))”. There are other forms of axiom, e.g., “forall x,y,z. (R(x,y) and S(y,z)) implies T(x,z)”, but the above three forms should suffice for this taster. Also, in addition to the (guarded) existential and universal quantification, OWL supports counting quantifiers.
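To relate these first-order shapes to OWL’s own syntax, here is a small sketch in Manchester syntax (all names are made up): the C frame captures the implication “forall x.C(x) implies (E(x) and exists y.(R(x,y) and F(y)))” together with a counting quantifier (“every C has at least two R-successors in F”), the T frame captures the chain axiom “forall x,y,z. (R(x,y) and S(y,z)) implies T(x,z)”, and the a frame contains the ground facts “C(a)” and “R(a,b)”:

```
Prefix: : <http://example.org/taster#>
Ontology: <http://example.org/taster>

  Class: E
  Class: F
  ObjectProperty: R
  ObjectProperty: S
  Individual: b

  Class: C
    SubClassOf: E and (R some F),
                R min 2 F

  ObjectProperty: T
    SubPropertyChain: R o S

  Individual: a
    Types: C
    Facts: R b
```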
The semantics of (the Description Logic corresponding to) OWL can be given either by a translation of OWL into first-order logic (e.g., “C SubClassOf D and (R some E)” is an OWL axiom in Manchester syntax that corresponds to “forall x.C(x) implies (D(x) and exists y.(R(x,y) and E(y)))”), or directly by saying (i) what an interpretation is, (ii) how to interpret complex expressions such as “D and E” or “R some E” (e.g., an element is in the interpretation of “R some E” exactly if it has an R-successor in the interpretation of E), and (iii) when an interpretation satisfies an axiom. These two semantics are completely equivalent, and preferring one over the other is simply a matter of representational taste. The entailment relationship in OWL is, unsurprisingly, also classical: an axiom is entailed by an ontology if it holds in all models of the ontology, i.e., if it is true in all interpretations that satisfy all axioms in the ontology. As a consequence, OWL is monotonic: the more axioms you add to your ontology, the more you restrict its models, and the more axioms are entailed.
Another thing that is unusual about OWL is its set of standard reasoning problems: given an ontology O, a DL reasoner (i.e., a theorem prover that implements a decision procedure) is assumed to be able to carry out any of the following tasks, and to provide them through a standard API such as the OWL API (a small example illustrating these tasks follows the list):
- test consistency of O, i.e., does O have a model?
- test coherence of O: for each class name (unary predicate) A occurring in O, test whether A is satisfiable w.r.t. O, i.e., whether O has a model in which A is interpreted as a non-empty set.
- compute the inferred class hierarchy: for each pair of class names A, B in O, test whether O entails “forall x.A(x) implies B(x)”. The resulting quasi order on class names is usually presented to the user as a hierarchy with equivalent classes collapsed into a single node.
- determine instance relationships: for each individual name (constant) e in O, determine those class names A such that O entails “A(e)”.
- answer queries: for a (possibly complex) class expression C (or a conjunctive query), return all those individuals e such that O entails “C(e)”.
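To make these tasks concrete, here is a toy ontology in Manchester syntax (all names are made up):

```
Prefix: : <http://example.org/family#>
Ontology: <http://example.org/family>

  ObjectProperty: hasChild

  Class: Person

  Class: Parent
    EquivalentTo: Person and (hasChild some Person)

  Class: ChildlessPerson
    EquivalentTo: Person and not (hasChild some owl:Thing)

  Class: ChildlessParent
    EquivalentTo: Parent and ChildlessPerson

  Individual: john
    Types: Person

  Individual: mary
    Types: Person
    Facts: hasChild john
```

A DL reasoner will report this ontology as consistent but not coherent: ChildlessParent is unsatisfiable, since its instances would need both to have and not to have a child. The inferred class hierarchy places Parent and ChildlessPerson below Person (with ChildlessParent collapsing into owl:Nothing), and instance retrieval for Parent returns mary, because she has a child who is a Person, even though the ontology nowhere states explicitly that she is a Parent.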
These reasoning problems can all be reduced to consistency checking — however, DL reasoner implementors had to develop sophisticated optimisations to avoid, for example, having to carry out n^2 subsumption tests when computing the class hierarchy for n class names: given that state-of-the-art ontologies contain tens of thousands, even hundreds of thousands, of classes, such optimisations are crucial.
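As an illustration of such a reduction (a sketch, using made-up class names A and B and a fresh individual name x0 that does not occur in O): O entails “forall x.A(x) implies B(x)” exactly if O extended with the assertion below has no model, so one subsumption test becomes one consistency test.

```
Individual: x0
  Types: A and (not B)
```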
In addition, a number of ‘non-standard’ reasoning tasks have been investigated around (the logics underpinning) OWL, e.g., module extraction, explanation, unification, and least common subsumer computation. An impressive suite of tools has been developed around OWL, including editors (e.g., Protégé 4), visualisers, and browsers (e.g., OWLSight), all aimed at providing domain experts with suitable means of interacting with ontologies and reasoners. And then there is the aforementioned OWL API, which provides tool developers with a means to interact with ontologies and reasoners.
As mentioned above, an OWL ontology contains both “class-level”, universally quantified, intensional axioms, e.g., “forall x.C(x) implies D(x)”, and ground facts, e.g., “C(a)”. In the underlying Description Logics, we would distinguish between these two kinds of axioms and store the former in the TBox (for “terminological”) and the latter in the ABox (for “assertional”). In an OWL ontology, this distinction is not made.
In OWL, we have a number of things that we do not normally find in logics:
- annotations: we can annotate an ontology, an axiom, a class, a property, etc. This is useful to give different labels to classes (e.g., terms in different languages, preferred terms, etc.) or to keep track of who introduced an axiom or a term.
- imports: we can split one ontology over several files, with explicit ‘imports’ statements telling an OWL parser how to assemble the whole ontology. This is handy for re-use and makes it possible to work on smaller pieces of an otherwise large ontology.
- datatypes: strings, integers, reals, etc. can easily occur in an ontology as attributes, say, of persons (e.g., a person’s age, their name, etc.). OWL supports XML Schema datatypes, e.g., it supports statements such as “hasLastName(p17, “Smith”)” and even implications such as “forall x,y. Child(x) and hasAge(x,y) implies y <= 18” (a small ontology combining an import, annotations, and such a datatype restriction is sketched after this list).
- structural equivalence: a notion of when two axioms are ‘syntactically equivalent’.
- different syntaxes: OWL has different syntaxes for different uses (e.g., RDF/XML for interchange, OWL/XML for processing with XML tools, Manchester syntax for human reading and writing of ontologies, etc.), and parsers and translators between these syntaxes are available.
- three profiles: these are fragments of OWL 2 designed for specific purposes. There is OWL 2 EL for large TBoxes (in DL terms), a sub-Boolean, Horn fragment with guarded existential quantification. Then there is OWL 2 QL for large ABoxes (in DL terms) combined with lightweight TBoxes. For these two fragments, query answering can be implemented using standard relational database technology (assuming that the ABox is stored in a database) — for the former with, for the latter without, changes to the stored data (OWL 2 QL queries can be rewritten so that the data itself need not be touched). And then there is OWL 2 RL, for which reasoning can be implemented using a rule engine.
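To give a flavour of these non-logical features, here is a small sketch in Manchester syntax (the IRIs and names are made up): it imports another ontology, attaches annotations to a class, and uses an XML Schema datatype restriction to state that children are at most 18 years old.

```
Prefix: : <http://example.org/people#>
Ontology: <http://example.org/people>
  Import: <http://example.org/upper>

  DataProperty: hasAge
  DataProperty: hasLastName

  Class: Person
    Annotations: rdfs:label "Person"@en,
                 rdfs:label "Person"@de,
                 rdfs:comment "an example class"

  Class: Child
    SubClassOf: Person,
                hasAge only xsd:integer[<= 18]

  Individual: p17
    Types: Person
    Facts: hasLastName "Smith",
           hasAge 12
```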
So, in this sense OWL is more than what you would normally expect from a logic.