Ontology Alignment Evaluation Initiative - OAEI 2018 Campaign

Ontology Alignment Evaluation Initiative

2018 Campaign

Since 2004, the OAEI has organised evaluation campaigns for assessing ontology matching technologies. This year we will combine tracks running under the SEALS platform and tracks running under the HOBBIT platform; some tracks will allow both types of participation.

Please check the organizing committee and main contacts of the OAEI 2018 campaign.

A special issue devoted to the participants and evaluation of the OAEI campaigns will appear in the Knowledge Engineering Review journal. As usual, participants will also be invited to present their results during the Ontology Matching Workshop 2018.

See the list of papers published in the Journal of Biomedical Semantics as part of the 2016 special issue on Ontology Alignment in Life Sciences.

Problems

The OAEI 2018 campaign will once again confront ontology matchers with ontologies and data sources to be matched. This year, the following test sets are available:

T-Box/Schema matching

anatomy
The anatomy real-world case is about matching the Adult Mouse Anatomy (2744 classes) and the part of the NCI Thesaurus describing the human anatomy (3304 classes).
conference
The goal of the track is to find alignments within a collection of ontologies describing the domain of organising conferences. 'Complex correspondences' are also very welcome. Alignments will be evaluated automatically against reference alignments, also considering their uncertain versions presented at ISWC 2014. Summary results, along with detailed performance results for each ontology pair (test case) and a comparison with tools' performance from previous years, will be provided.
Multifarm
This dataset is composed of a subset of the Conference dataset, translated into nine different languages (Arabic, Chinese, Czech, Dutch, French, German, Portuguese, Russian, and Spanish), and the corresponding alignments between these ontologies. Based on these test cases, it is possible to evaluate and compare the performance of matching approaches with a special focus on multilingualism.
Complex
This track evaluates the detection of complex correspondences between ontologies of four different domains: conference, hydrography, geography and species taxonomy. Each dataset has its particularities and evaluation modalities.
Interactive matching evaluation (interactive)
This track offers the possibility to compare different interactive matching tools which require user interaction. The goal is to show whether user interaction can improve matching results, which methods are most promising, and how many interactions are necessary. All participating systems are evaluated using an oracle based on the reference alignment. Using the SEALS client, a matching system only needs to be slightly adapted to participate in this track, as sketched below.
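
To give an idea of what the adaptation involves, here is a minimal sketch of how a matcher might query the oracle. The class and method names (Oracle.check) follow our recollection of the SEALS interactive client and should be verified against the track instructions.

    import eu.sealsproject.omt.client.interactive.Oracle;

    public class CandidateFilter {

        // Ask the oracle whether a candidate correspondence is correct.
        // Each call counts as one user interaction, so ask sparingly.
        // NOTE: Oracle.check is our recollection of the SEALS interactive
        // client API; check the track documentation for the exact signature.
        public boolean validate(String sourceUri, String targetUri) {
            // "=" denotes an equivalence correspondence.
            return Oracle.check(sourceUri, targetUri, "=");
        }
    }
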
Large Biomedical Ontologies (largebio)
This track consists of finding alignments between the Foundational Model of Anatomy (FMA), SNOMED CT, and the National Cancer Institute Thesaurus (NCI). These ontologies are semantically rich and contain tens of thousands of classes. The UMLS Metathesaurus has been selected as the basis for the track's reference alignments.
Disease and Phenotype (phenotype)
The Pistoia Alliance Ontologies Mapping project team organises and sponsors this track based on a real use case where it is required to find alignments between disease and phenotype ontologies. Specifically, the selected ontologies are the Human Phenotype (HP) Ontology, the Mammalian Phenotype (MP) Ontology, the Human Disease Ontology (DOID), and the Orphanet and Rare Diseases Ontology (ORDO).
Biodiversity and Ecology (biodiv)
The goal of the track is to find pairwise alignments between the Environment Ontology (ENVO) and the Semantic Web for Earth and Environment Technology Ontology (SWEET), and between the Plant Trait Ontology (PTO) and the Flora Phenotype Ontology (FLOPO). These ontologies are particularly useful for biodiversity and ecology research and are being used in various projects. They have been developed in parallel and overlap considerably. They are semantically rich and contain tens of thousands of classes.

Instance matching or link discovery

SPIMBENCH (spimbench)
The goal of this track is to determine when two OWL instances describe the same Creative Work. The datasets are generated and transformed using SPIMBENCH by altering a set of original data through value-based, structure-based, and semantics-aware transformations (and simple combinations thereof).
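
As an illustration of what a value-based transformation can look like (a toy sketch, not SPIMBENCH's actual code), consider perturbing a literal so that two instances describing the same Creative Work no longer share identical strings:

    import java.util.Random;

    // Toy value-based transformation: swap two adjacent characters to
    // simulate a typo in a literal (not SPIMBENCH's actual code).
    public class ValueTransformation {

        private static final Random RANDOM = new Random(42);

        public static String injectTypo(String value) {
            if (value.length() < 2) return value;
            int i = RANDOM.nextInt(value.length() - 1);
            char[] chars = value.toCharArray();
            char tmp = chars[i];
            chars[i] = chars[i + 1];
            chars[i + 1] = tmp;
            return new String(chars);
        }

        public static void main(String[] args) {
            System.out.println(injectTypo("Creative Work Title"));
        }
    }
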
Link Discovery (link)
This track proposes two benchmark generators addressing link discovery for spatial data, where the data are represented as trajectories (i.e., sequences of longitude/latitude pairs).
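
For intuition, the sketch below (using the open-source JTS Topology Suite, which the track does not mandate) represents two toy trajectories as WKT LINESTRINGs and computes their topological relation:

    import org.locationtech.jts.geom.Geometry;
    import org.locationtech.jts.io.ParseException;
    import org.locationtech.jts.io.WKTReader;

    public class TrajectoryRelation {
        public static void main(String[] args) throws ParseException {
            WKTReader reader = new WKTReader();
            // Two toy trajectories as sequences of (longitude latitude) points.
            Geometry t1 = reader.read("LINESTRING (23.71 37.97, 23.72 37.98, 23.73 37.99)");
            Geometry t2 = reader.read("LINESTRING (23.72 37.96, 23.72 38.00)");
            System.out.println(t1.relate(t2));  // DE-9IM intersection matrix
            System.out.println(t1.crosses(t2)); // a specific topological predicate
        }
    }
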
IIMB (IIMB)
IIMB is an OWL-based dataset that is automatically generated by introducing a set of controlled transformations into an initial OWL ABox, in order (i) to provide an evaluation dataset for various kinds of data transformations, including value, structural, and logical transformations, and (ii) to cover a wide spectrum of possible techniques and tools.

Instance and schema matching

Knowledge graph
The Knowledge Graph Track contains nine isolated knowledge graphs with instance and schema data. The goal of the task is to match both the instances and the schema.

Evaluation

Preparation phase

OAEI track organisers can choose to use SEALS and/or HOBBIT for their tracks. New organisers are encouraged to (try to) use the HOBBIT platform.

All public datasets should be available by the end of this phase.

Execution phase

OAEI participants should follow the SEALS instructions and/or the HOBBIT instructions, depending on the tracks in which they wish to participate.
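
For SEALS, wrapping a tool amounts to exposing it behind a small bridge class. The sketch below shows the expected shape; in an actual package the class would implement the SEALS matching interface (eu.sealsproject.platform.res.domain.omt.IOntologyMatchingToolBridge, as we recall from the tutorials; check the SEALS instructions for the exact interface and any further required methods). It is kept dependency-free here so the sketch compiles on its own.

    import java.net.URL;

    public class MyMatcherBridge {

        // Match two ontologies and return a URL pointing to the produced
        // alignment file (serialised in the Alignment format).
        public URL align(URL source, URL target) {
            // 1. Load the two ontologies from the given URLs.
            // 2. Run the matching algorithm.
            // 3. Write the result to a temporary file and return its URL.
            throw new UnsupportedOperationException("matcher logic goes here");
        }

        // Variant taking an input alignment, used by some tracks.
        public URL align(URL source, URL target, URL inputAlignment) {
            return align(source, target);
        }
    }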

We encourage systems developers to test their systems with HOBBIT/SEALS in the early stages of this phase to avoid last minute problems with the evaluation infrastructure. Once the execution phase ends, there will be limited time to solve technical problems with the evaluation platforms.

Evaluation phase

Evaluation will be run under both the SEALS and HOBBIT infrastructures.

Participants will be evaluated with respect to all OAEI tracks (when possible), even though a system might be specialised for some specific kind of matching problem. We know that this can be a problem for systems that have been developed specifically for, e.g., matching biomedical ontologies; if the results generated for some specific track are poor, this point can be emphasised in the results paper about the system.

The results will be reported at the International Workshop on Ontology Matching, which will be collocated with the 17th International Semantic Web Conference (ISWC 2018).

Visual support for the evaluation (optional use): AlignmentCubes is an interactive visual environment which provides comparative exploration and evaluation of multiple ontology alignments at different levels of detail. AlignmentCubes can support (a) developers during the process of developing and debugging alignment algorithms, (b) evaluators in making observations at different levels of detail, and (c) data integrators in selecting and configuring their tools as well as in developing and debugging alignments. More information can be found here.

OAEI rules

Please note that a matcher may want to behave differently depending on what it is provided with as ontologies; however, this should not be based on features specific to the tracks (e.g., a particular string in the URL or a particular class name) but on features of the ontologies (e.g., there are no instances, or labels are in German). Check the OAEI rules here.
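
As an illustration, the sketch below (using the OWL API, with version 4 style method names) adapts on features of the ontologies rather than on track-specific strings: it checks whether the input contains instances and whether any labels are in German.

    import java.io.File;
    import org.semanticweb.owlapi.apibinding.OWLManager;
    import org.semanticweb.owlapi.model.*;

    public class OntologyProfiler {

        public static void profile(File ontologyFile) throws OWLOntologyCreationException {
            OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
            OWLOntology ontology = manager.loadOntologyFromOntologyDocument(ontologyFile);

            // Feature 1: does the ontology contain any instances?
            boolean hasInstances = !ontology.getIndividualsInSignature().isEmpty();

            // Feature 2: are any rdfs:label annotations in German?
            boolean hasGermanLabels = ontology.getAxioms(AxiomType.ANNOTATION_ASSERTION)
                    .stream()
                    .filter(ax -> ax.getProperty().isLabel())
                    .map(OWLAnnotationAssertionAxiom::getValue)
                    .filter(v -> v instanceof OWLLiteral)
                    .anyMatch(v -> ((OWLLiteral) v).hasLang("de"));

            System.out.println("instances: " + hasInstances
                    + ", German labels: " + hasGermanLabels);
        }
    }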

Systems that rely on or are derived from other ontology matching systems should clearly state (a) the system they rely on, and (b) what was changed from or added to the original system.

Withdrawal of systems is possible up to one week after submission. After this period you accept that your systems will be evaluated and the results will be made publicly available within the OAEI pages and the OAEI evaluation report, in accordance with the OAEI data policy.

Schedule

June 15th
preliminary datasets available.
July 15th
preparation phase ends and final datasets are available.
July 31st
participants register their tool (mandatory). Please use this form (requires a google account and a valid email).
September 9th (extended from August 31st)
execution phase ends and participants submit the final versions of their tools: SEALS tracks via this form (zip file, e.g., LogMap.zip); HOBBIT tracks via the platform.
September 30th
evaluation phase ends and results are available for both SEALS and HOBBIT tracks.
October 3rd
Preliminary versions of system papers due. Submit a PDF paper (e.g., LogMap_prelim.pdf) using this form (requires a google account and a valid email).
October 8th
Ontology matching workshop.
November 1st
Final versions of system papers due. Submit a PDF paper (e.g., LogMap_final.pdf) using this form (requires a google account and a valid email).

Presentation

From the results of the experiments, participants are expected to provide the organisers with a paper to be published in the proceedings of the Ontology Matching workshop. The paper should be no more than 8 pages long and formatted using the LNCS style. Long-running systems can submit a two-page summary if there have been no significant additions to the system. Please use this form for the submission (requires a google account and a valid email).

These papers are not peer-reviewed, but they will be revised by 1-2 OAEI organisers. The main objective of these OAEI papers is to keep track of the participants and of the descriptions of the matchers which took part in the campaign.

To ensure easy comparability among the participants, it is desirable that the paper follow this outline:

  1. Presentation of the system
    1. State, purpose, general statement
    2. Specific techniques used
    3. Adaptations made for the evaluation
    4. Link to the system and parameters file
    5. Link to the set of provided alignments (in the Alignment format; see the sketch after this outline)
  2. Results
    • 2.x) a comment for each dataset on which the system was run
  3. General comments
    (not necessarily as separate subsections, but preferably in this order).
    1. Comments on the results (strength and weaknesses)
    2. Discussions on the way to improve the proposed system
    3. Comments on the OAEI procedure (including comments on the SEALS evaluation, if relevant)
    4. Comments on the OAEI test cases
    5. Comments on the OAEI measures
    6. Proposed new measures
  4. Conclusions
  5. References
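
Regarding item 1.5, the Alignment format is the RDF serialisation of the Alignment API. Below is a sketch of producing it programmatically; the method names follow our reading of the Alignment API documentation and should be verified against the version you use, and the ontology and entity URIs are hypothetical.

    import java.io.PrintWriter;
    import java.net.URI;
    import fr.inrialpes.exmo.align.impl.URIAlignment;
    import fr.inrialpes.exmo.align.impl.renderer.RDFRendererVisitor;
    import org.semanticweb.owl.align.AlignmentVisitor;

    public class AlignmentExport {
        public static void main(String[] args) throws Exception {
            URIAlignment alignment = new URIAlignment();
            // Hypothetical ontology URIs, for illustration only.
            alignment.init(new URI("http://example.org/onto1"),
                           new URI("http://example.org/onto2"));

            // One equivalence correspondence with confidence 0.92.
            alignment.addAlignCell(new URI("http://example.org/onto1#Paper"),
                                   new URI("http://example.org/onto2#Article"),
                                   "=", 0.92);

            // Serialise to the RDF/XML Alignment format expected by the evaluators.
            PrintWriter out = new PrintWriter(System.out);
            AlignmentVisitor renderer = new RDFRendererVisitor(out);
            alignment.render(renderer);
            out.flush();
        }
    }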

The results from both selected participants and organisers will be presented at the International Workshop on Ontology Matching, collocated with ISWC 2018, taking place in Monterey (CA, USA) in October 2018.