Ontology Alignment Evaluation Initiative - OAEI-2018 Campaign

Complex track

General description

Complex alignments are more expressive than simple alignments as their correspondences can contain logical constructors or transformation functions of literal values.

For example, given two ontologies o1 and o2, a single class of o1 may correspond to a combination of classes and property restrictions of o2, rather than to a single named entity of o2.

With this track, we evaluate systems which can generate such correspondences.
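To make this concrete, here is a small sketch with hypothetical entity names (not taken from any track dataset), showing a simple correspondence, a complex one using a logical constructor, and one transforming literal values:

```python
# A simple (1:1) correspondence links one named entity to one named entity.
simple_corr = ("o1:Paper", "equivalence", "o2:Paper")

# A complex correspondence may use a logical constructor on one side,
# here "Paper and (exists hasDecision . Acceptance)" -- something no
# simple 1:1 correspondence can express.
constructor_corr = (
    "o1:AcceptedPaper",
    "equivalence",
    ("and", "o2:Paper", ("exists", "o2:hasDecision", "o2:Acceptance")),
)

# A complex correspondence may also transform literal values: here
# o1:fullName corresponds to the concatenation of two o2 properties.
def transform(instance):
    return f"{instance['o2:firstName']} {instance['o2:lastName']}"

print(transform({"o2:firstName": "Ada", "o2:lastName": "Lovelace"}))
# Ada Lovelace
```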

The complex track contains 5 datasets covering 4 different domains: Conference and Populated Conference, Hydrography, GeoLink and Taxon. Each dataset and its evaluation methods are presented below.

The participants of the track should output their (complex) correspondences in the EDOAL format. This format is supported by the Alignment API. The evaluation will be supported by the SEALS platform. The participants have to wrap their tools with the SEALS client as described at SEALS evaluation for OAEI 2018. The parameters for executing the tasks in each dataset (repository, suite-id, version-id) are listed in boxes below.

The number of ontologies and of simple (1:1) and complex (1:n and m:n) correspondences for each dataset of this track is summarized in the following table.

Dataset               #Ontologies   #(1:1)   #(1:n)   #(m:n)
Conference            3             78       79       0
Populated Conference  5             111      86       98
Hydrography           4             113      69       15
GeoLink               2             19       5        43
Taxon                 4             6        17       3


The schedule is available at the OAEI main page.

Datasets and Evaluation Modalities

Conference dataset

Ontologies and correspondences

This dataset is based on the OntoFarm dataset [1] used in the Conference track of the OAEI campaigns. It is composed of 16 ontologies on the conference organisation domain and simple reference alignments between 7 of them. Here, we consider 3 out of the 7 ontologies from the reference alignments (cmt, conference and ekaw), resulting in 3 alignment pairs.

Conference Testsuite

The correspondences were manually curated by 3 experts following the query rewriting methodology in [2]. For each pair o1-o2 of ontologies, the following steps were applied:

4 experts assessed the curated correspondences to reach a consensus.

Evaluation modalities

The complex correspondences output by the systems will be manually compared to the ones of the consensus alignment.

For this first evaluation, only equivalence correspondences will be evaluated, and the confidence of the correspondences will not be taken into account.

The systems can take the ra1 simple alignments as input. The ra1 alignments can be downloaded here.

Populated Conference dataset

In order to allow matchers which rely on instances to participate in the Conference complex track, we propose a populated version of the Conference dataset: 5 ontologies have been populated with instances, with a varying proportion of instances in common, resulting in 6 versions (on the SEALS repository: v0, v20, v40, v60, v80 and v100).

Populated Conference Testsuite

Evaluation modalities

The alignments will be evaluated based on Competency Questions for Alignment: basic queries that the alignment should be able to cover [6].

The queries are automatically rewritten using 2 systems:

For each query, the best score among the rewritten queries is kept.

A precision score will be given by comparing the instances described by the source and target members of the correspondences.
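The instance-based precision score described above can be sketched in a few lines (a simplified illustration, not the official evaluation code; function and instance names are invented):

```python
def instance_precision(source_instances, target_instances):
    """Fraction of the instances retrieved by the source member of a
    correspondence that are also retrieved by its target member over
    the populated ontologies."""
    source, target = set(source_instances), set(target_instances)
    if not source:
        return 0.0
    return len(source & target) / len(source)

# Example: the source member selects 4 instances, 3 of which are also
# selected by the target member.
print(instance_precision({"p1", "p2", "p3", "p4"}, {"p1", "p2", "p3", "x"}))
# 0.75
```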

Details on the population and evaluation modalities are given at: https://framagit.org/IRIT_UT2J/conference-dataset-population.

Hydrography dataset

Ontologies and correspondences

The hydrography dataset is composed of four source ontologies (Hydro3, HydrOntology_native, HydrOntology_translated, and Cree), each of which should be aligned to a single target Surface Water Ontology (SWO). The source ontologies vary in their similarity to the target ontology: Hydro3 is similar in both language and structure, hydrOntology is similar in structure but is in Spanish rather than English, and Cree is very different in terms of both language and structure. All ontologies can be downloaded at once here.


The alignments were created by a geologist and an ontologist, in consultation with a native Spanish speaker regarding the hydrOntology, and consist of logical relations.


There are three subtasks in the Hydrography complex alignment track:

  1. Entity Identification

    For each entity in the source ontology, the alignment system is asked to list all of the entities in the target ontology that are related to it in some way.

    For example:

    owl:equivalentClass(ont1:A1, owl:intersectionOf(ont2:B1, owl:someValuesFrom(ont2:B2, ont2:B3)))

    The goal in this task is to find the entities in ont2 that are most relevant to the class ont1:A1. In this case, the best output would be ont2:B1, ont2:B2, and ont2:B3.

  2. Relationship Identification

    For each alignment, the system should then endeavor to find the concrete relationships, such as equivalence, subsumption, intersection, value restriction, and so on, that hold between the entities. In terms of the example above, an alignment system needs to determine that the relationship between the two sides is equivalence.

  3. Full complex alignment Identification

    This task is a combination of the two former steps.
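The three subtasks can be illustrated on the example cell from subtask 1 (the parsed representation below is an assumption of this sketch, not the official submission format):

```python
# Parsed form of the example cell
# owl:equivalentClass(ont1:A1, owl:intersectionOf(ont2:B1,
#     owl:someValuesFrom(ont2:B2, ont2:B3)))
reference_cell = {
    "source": "ont1:A1",
    "relation": "equivalence",                            # subtask 2 answer
    "target": ("intersectionOf", "ont2:B1",
               ("someValuesFrom", "ont2:B2", "ont2:B3")), # subtask 3 answer
}

def related_entities(expression):
    """Subtask 1: flatten the target expression into the set of
    target-ontology entities related to the source entity."""
    if isinstance(expression, str):
        return {expression}
    found = set()
    for argument in expression[1:]:   # skip the constructor name
        found |= related_entities(argument)
    return found

print(sorted(related_entities(reference_cell["target"])))
# ['ont2:B1', 'ont2:B2', 'ont2:B3']
```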

Evaluation modalities

After we collect the results from the matching systems, we plan to use relaxed precision and recall [5] as the metrics to evaluate performance on the three tasks. The full reference alignment can be downloaded from here.
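Relaxed precision and recall generalize the classical measures by granting partial credit to near-miss correspondences instead of scoring them 0. The following is a minimal sketch in that spirit (the proximity function here is illustrative, not the one defined in [5]):

```python
def proximity(found, reference):
    """Partial credit in [0, 1] for a found correspondence against a
    reference one; 1.0 means an exact match. Correspondences are
    represented here as sets of the entities they involve."""
    f, r = set(found), set(reference)
    return len(f & r) / len(f | r) if f | r else 0.0

def relaxed_scores(found_alignment, reference_alignment):
    # Each found cell is credited with its best proximity to any reference cell.
    credit = sum(max((proximity(f, r) for r in reference_alignment), default=0.0)
                 for f in found_alignment)
    precision = credit / len(found_alignment) if found_alignment else 0.0
    recall = credit / len(reference_alignment) if reference_alignment else 0.0
    return precision, recall

found = [{"ont1:A1", "ont2:B1"}, {"ont1:A2", "ont2:B9"}]
reference = [{"ont1:A1", "ont2:B1"}, {"ont1:A2", "ont2:B2", "ont2:B3"}]
print(relaxed_scores(found, reference))
# (0.625, 0.625)
```

The second found cell is wrong under exact matching, yet still earns partial credit for sharing its source entity with a reference cell.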

GeoLink dataset

Ontologies and correspondences

This dataset is from the GeoLink project, which was funded under the U.S. National Science Foundation's EarthCube initiative. It is composed of two ontologies: the GeoLink Base Ontology (GBO) and the GeoLink Modular Ontology (GMO). The GeoLink project is a real-world use case of ontologies, and its instance data is available. The ontologies with the populated instance data can also be downloaded (here). The alignment between the two ontologies was developed in consultation with domain experts from several geoscience research institutions. This alignment is a slightly simplified version of the one discussed in [4]. The relations that involve punning have been removed due to a concern that many automated alignment systems would not consider these as potential mappings. More details can be found in [4].



The same three subtasks as described for the Hydrography dataset also apply to this dataset.

Evaluation modalities

The evaluation of the systems will be performed by computing relaxed precision and recall for the three tasks. The reference alignment can be downloaded from here.

Taxon dataset

Ontologies and correspondences

The Taxon dataset is composed of 4 ontologies which describe the classification of species: AgronomicTaxon, Agrovoc, DBpedia and TaxRef-LD. All the ontologies are populated. The common scope of these ontologies is plant taxonomy. The alignments were manually created with the help of one expert and involve only logical constructors. This dataset extends the one proposed in [3] by adding the TaxRef-LD ontology.

Evaluation modalities

The evaluation of this dataset is task-oriented: we will evaluate the generated correspondences using a SPARQL query rewriting system and manually measure their ability to answer a set of queries over each dataset. The alignments have to be in EDOAL. The systems will be evaluated on a subset of the dataset (common scope). The evaluation is blind.
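A minimal sketch of the kind of query rewriting involved (the correspondence and all names are hypothetical, and real rewriting systems such as the one in [7] operate on parsed queries rather than raw strings):

```python
# A 1:n complex correspondence: one source class corresponds to a
# target-side graph pattern of two triple patterns.
correspondence = {
    "?x a src:PlantTaxon":
        "?x a tgt:Taxon . ?x tgt:kingdom tgt:Plantae",
}

def rewrite(query):
    """Replace each source triple pattern covered by a correspondence
    with the corresponding target graph pattern."""
    for source_pattern, target_pattern in correspondence.items():
        query = query.replace(source_pattern, target_pattern)
    return query

# The src:label pattern stays as-is: no correspondence covers it.
source_query = "SELECT ?x WHERE { ?x a src:PlantTaxon . ?x src:label ?l }"
print(rewrite(source_query))
```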



[1] Ondřej Zamazal, Vojtěch Svátek. The Ten-Year OntoFarm and its Fertilization within the Onto-Sphere. Web Semantics: Science, Services and Agents on the World Wide Web, 43, 46-53. 2017.

[2] Élodie Thiéblin, Ollivier Haemmerlé, Nathalie Hernandez, Cassia Trojahn. Task-Oriented Complex Ontology Alignment: Two Alignment Evaluation Sets. In : European Semantic Web Conference. Springer, Cham, 655-670, 2018.

[3] Élodie Thiéblin, Fabien Amarger, Nathalie Hernandez, Catherine Roussey, Cassia Trojahn. Cross-querying LOD datasets using complex alignments: an application to agronomic taxa. In: Research Conference on Metadata and Semantics Research. Springer, Cham, 25-37, 2017.

[4] Lu Zhou, Michelle Cheatham, Adila Krisnadhi, Pascal Hitzler. A Complex Alignment Benchmark: GeoLink Dataset. In: International Semantic Web Conference, Proceedings, Part II. Springer, 273-288, 2018.

[5] Marc Ehrig, Jérôme Euzenat. Relaxed precision and recall for ontology matching. In: K-CAP 2005 Workshop on Integrating Ontologies, Banff, Canada, 2005.

[6] Élodie Thiéblin. Do competency questions for alignment help fostering complex correspondences?. In EKAW Doctoral Consortium, 2018.

[7] Élodie Thiéblin, Fabien Amarger, Ollivier Haemmerlé, Nathalie Hernandez, Cassia Trojahn. Rewriting SELECT SPARQL queries from 1:n complex correspondences. In: Ontology Matching, pp. 49-60, 2016.