Ontology Alignment Evaluation Initiative - OAEI-2015 Campaign

Ontology Alignment for Query Answering (OA4QA)

General description

This track will not follow the classical ontology alignment evaluation with respect to a set of reference alignments. Instead, precision and recall will be calculated with respect to the ability of the generated alignments to answer a set of queries in an ontology-based data access scenario involving several ontologies.

Dataset and queries

In the OAEI 2015 campaign the dataset is based on the Conference track. We have populated some of the Conference ontologies with (synthetic) ABoxes extracted from the DBLP dataset, and we have defined a set of queries over these ABoxes. For example, given the query Q(x) := Author(x), expressed using the vocabulary of the Cmt ontology, the Ekaw ontology has been enriched with synthetic data, and Q(x) is executed over the aligned ontology Cmt ∪ Ekaw ∪ M, where M is an alignment between the Cmt and Ekaw conference ontologies.
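This running example can be reproduced with the OWL API and HermiT along the following lines (a minimal sketch only: file names and the cmt namespace IRI are illustrative, and the alignment M is assumed to be serialised as OWL axioms):

    import java.io.File;
    import java.util.Set;
    import org.semanticweb.HermiT.ReasonerFactory;
    import org.semanticweb.owlapi.apibinding.OWLManager;
    import org.semanticweb.owlapi.model.*;
    import org.semanticweb.owlapi.reasoner.OWLReasoner;
    import org.semanticweb.owlapi.util.OWLOntologyMerger;

    public class AlignedQueryExample {
        public static void main(String[] args) throws Exception {
            OWLOntologyManager mgr = OWLManager.createOWLOntologyManager();
            // Load Cmt, Ekaw and the alignment M (file names are illustrative;
            // M is assumed to be given as OWL equivalence/subsumption axioms).
            mgr.loadOntologyFromOntologyDocument(new File("cmt.owl"));
            mgr.loadOntologyFromOntologyDocument(new File("ekaw.owl"));
            mgr.loadOntologyFromOntologyDocument(new File("cmt-ekaw-alignment.owl"));
            // Build the aligned ontology Cmt ∪ Ekaw ∪ M.
            OWLOntology aligned = new OWLOntologyMerger(mgr)
                    .createMergedOntology(mgr, IRI.create("urn:example:aligned"));
            // Q(x) := Author(x): retrieve all inferred instances of cmt:Author.
            OWLReasoner reasoner = new ReasonerFactory().createReasoner(aligned);
            OWLClass author = mgr.getOWLDataFactory()
                    .getOWLClass(IRI.create("http://cmt#Author"));
            Set<OWLNamedIndividual> answers =
                    reasoner.getInstances(author, false).getFlattened();
            answers.forEach(a -> System.out.println(a.getIRI()));
        }
    }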

We differentiate three categories of queries: (i) basic queries that should be successful for most systems, (ii) queries for which the semantic consequences of the aligned ontology are critical, and (iii) advanced queries for which non-trivial mappings are required.

The evaluation framework, together with a subset of the dataset and queries composing the OA4QA evaluation, can be found here (23 MB) (last modified: 14/06/2015).

Query evaluation engine

The evaluation engine considered is an extension of the OWL 2 reasoner HermiT, known as OWL-BGP [1]. OWL-BGP is able to process SPARQL queries in the SPARQL-OWL fragment under the OWL 2 Direct Semantics entailment regime. The queries employed in the OA4QA track are standard conjunctive queries, which are fully supported by the more expressive SPARQL-OWL fragment.
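To illustrate, the running example Q(x) := Author(x), and a slightly richer conjunctive query, could be written in SPARQL as follows (a sketch only: the prefix IRI and the writePaper property are assumptions about the Cmt vocabulary, and the OWL-BGP invocation itself is omitted):

    public class Oa4qaQueries {
        // SPARQL-OWL form of Q(x) := Author(x); under the OWL 2 Direct Semantics
        // entailment regime the basic graph pattern is matched against entailed
        // (not merely asserted) facts. The prefix IRI is illustrative.
        static final String Q_AUTHOR =
                "PREFIX cmt: <http://cmt#> "
              + "SELECT ?x WHERE { ?x a cmt:Author }";

        // A conjunctive query with a join: authors together with the papers
        // they write (cmt:writePaper is an assumption about the Cmt vocabulary).
        static final String Q_AUTHOR_PAPER =
                "PREFIX cmt: <http://cmt#> "
              + "SELECT ?x ?p WHERE { ?x a cmt:Author . ?x cmt:writePaper ?p }";
    }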

Evaluation metrics and reference answer set

The evaluation metrics used for the OA4QA track are the classic information retrieval ones (i.e., precision, recall and f-measure), computed over the result set of the query evaluation. In order to compute a reference (or model) answer set for the query results, the publicly available reference alignment (RA1) of the Conference track has been used. For example, given a query and an ontology pair (e.g., cmt and ekaw), the model answer set is computed using the corresponding reference alignment for that pair (e.g., ra1_cmt_ekaw).
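Computed answers can then be scored against a model answer set with the standard set-based definitions (a minimal sketch; answers are represented here simply as individual IRIs in string form):

    import java.util.HashSet;
    import java.util.Set;

    public class AnswerSetMetrics {
        /** Precision, recall and f-measure of a system answer set
         *  against a reference (model) answer set. */
        static double[] score(Set<String> system, Set<String> reference) {
            Set<String> common = new HashSet<>(system);
            common.retainAll(reference);  // answers shared with the reference
            double p = system.isEmpty()    ? 0.0 : (double) common.size() / system.size();
            double r = reference.isEmpty() ? 0.0 : (double) common.size() / reference.size();
            double f = (p + r == 0.0)      ? 0.0 : 2.0 * p * r / (p + r);
            return new double[] { p, r, f };
        }
    }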

An alternative reference answer set has also been computed after repairing the reference alignment RA1 with respect to conservativity and consistency violations [2]. The repaired RA1 is called RAR1.

Precision and recall are calculated with respect to the aforementioned model answer sets: for each ontology pair (O1 and O2), query Q(x), and alignment M computed by the different matching systems participating in the Conference track, we compare (in terms of precision and recall) the answer set of Q(x) executed over O1 ∪ O2 ∪ M with the result sets of the same query Q(x) executed over O1 ∪ O2 ∪ RA1 and over O1 ∪ O2 ∪ RAR1, as sketched below.
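Schematically, the per-query comparison looks as follows (a hypothetical driver: runQuery is an assumed helper standing in for the OWL-BGP evaluation over the corresponding aligned ontology, and AnswerSetMetrics is the scoring sketch above):

    // Hypothetical driver for one ontology pair and one query: score the
    // answers obtained with the system alignment M against the answer sets
    // obtained with RA1 and with the repaired RAR1.
    static void evaluate(String q, String o1, String o2) {
        Set<String> answersM    = runQuery(q, o1, o2, "M");     // over O1 ∪ O2 ∪ M
        Set<String> answersRA1  = runQuery(q, o1, o2, "RA1");   // over O1 ∪ O2 ∪ RA1
        Set<String> answersRAR1 = runQuery(q, o1, o2, "RAR1");  // over O1 ∪ O2 ∪ RAR1
        double[] vsRA1  = AnswerSetMetrics.score(answersM, answersRA1);
        double[] vsRAR1 = AnswerSetMetrics.score(answersM, answersRAR1);
        System.out.printf("P/R/F vs RA1:  %.2f / %.2f / %.2f%n", vsRA1[0], vsRA1[1], vsRA1[2]);
        System.out.printf("P/R/F vs RAR1: %.2f / %.2f / %.2f%n", vsRAR1[0], vsRAR1[1], vsRAR1[2]);
    }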

Note that the occurrence of unsatisfiable classes in O1 ∪ O2 ∪ M will have a critical impact on the query answering process.
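For this reason it is worth checking satisfiability before query evaluation; with the OWL API and HermiT this can be sketched as follows (reusing the aligned ontology built in the earlier sketch):

    // Detect unsatisfiable classes in O1 ∪ O2 ∪ M before running queries;
    // mappings that make classes unsatisfiable can distort or invalidate
    // the answers computed over the aligned ontology.
    OWLReasoner reasoner = new ReasonerFactory().createReasoner(aligned);
    Set<OWLClass> unsat =
            reasoner.getUnsatisfiableClasses().getEntitiesMinusBottom();
    if (!unsat.isEmpty()) {
        System.err.println("Unsatisfiable classes in O1 ∪ O2 ∪ M: " + unsat);
    }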

References

  1. Ilianna Kollia, Birte Glimm, and Ian Horrocks. SPARQL Query Answering over OWL Ontologies. In: The Semantic Web: Research and Applications. Springer, 2011.
  2. Alessandro Solimando, Ernesto Jiménez-Ruiz, and Giovanna Guerrini. Detecting and Correcting Conservativity Principle Violations in Ontology-to-Ontology Mappings. In: International Semantic Web Conference (ISWC). Springer, 2014.

Contact: alessandro [.] solimando [at] unige [.] it or ernesto [at] cs [.] ox [.] ac [.] uk

Original page: http://www.cs.ox.ac.uk/isg/projects/Optique/oaei/oa4qa/2015/index.html [cached: 13/05/2016]