
Ontology Alignment Evaluation Initiative

2015 Campaign

Since 2004, OAEI has organised evaluation campaigns aimed at assessing ontology matching technologies.

Problems

The OAEI 2015 campaign will once again confront ontology matchers with ontologies and data sources to be matched. This year, the following test sets are available:

benchmark
As in previous campaigns, a systematic benchmark series has to be matched. The goal of this benchmark series is to identify the areas in which each alignment algorithm is strong or weak. The test is no longer based on the same dataset that was used from 2004 to 2010: we are now able to generate undisclosed tests with the same structure. These provide strongly comparable results and allow for testing scalability.
anatomy
The anatomy real-world case is about matching the Adult Mouse Anatomy (2744 classes) and the part of the NCI Thesaurus (3304 classes) describing human anatomy.
conference
The goal of the track is to find alignments within a collection of ontologies describing the domain of organising conferences. Additionally, 'complex correspondences' are also very welcome. Alignments will be evaluated automatically against reference alignments, also considering their uncertain version presented at ISWC 2014. Summary results, detailed performance results for each ontology pair (test case), and a comparison with tools' performance in previous years will be provided (the standard evaluation measures are recalled after this track list).
Multifarm
This dataset is composed of a subset of the Conference dataset, translated into nine different languages (Arabic, Chinese, Czech, Dutch, French, German, Portuguese, Russian, and Spanish), together with the corresponding alignments between these ontologies. Based on these test cases, it is possible to evaluate and compare the performance of matching approaches with a special focus on multilingualism.
Interactive matching evaluation (interactive)
This track offers the possibility to compare matching tools that require user interaction. The goal is to show whether user interaction can improve matching results, which methods are most promising, and how many interactions are necessary. All participating systems are evaluated using an oracle based on the reference alignment. Using the SEALS client, a matching system only needs to be slightly adapted in order to participate in this track.
Large Biomedical Ontologies (largebio)
This track consists of finding alignments between the Foundational Model of Anatomy (FMA), SNOMED CT, and the National Cancer Institute Thesaurus (NCI). These ontologies are semantically rich and contain tens of thousands of classes. The UMLS Metathesaurus has been selected as the basis for the reference alignments of this track.
Instance Matching (im)
The Instance Matching track aims at evaluating the performance of matching tools when the goal is to detect the degree of similarity between pairs of items/instances expressed in the form of OWL ABoxes. The track is organized in five independent tasks. To participate in the Instance Matching track, submit results for one, several, or all of the tasks. Each task is articulated in two tests of different scales (i.e., number of instances to match):
  i) Sandbox (small scale): contains two datasets, called source and target, as well as the set of expected mappings (i.e., the reference alignment).
  ii) Mainbox (medium scale): contains two datasets, called source and target. This test is blind, meaning that the reference alignment is not given to the participants.
In both tests, the goal is to discover the matching pairs (i.e., mappings) between the instances in the source dataset and the instances in the target dataset.

Ontology Alignment for Query Answering (oa4qa)
This track will not follow the classical ontology alignment evaluation against a set of reference alignments. Precision and recall will be calculated with respect to the ability of the generated alignments to answer a set of queries in an ontology-based data access scenario where several ontologies exist. In the OAEI 2015 campaign, the datasets will be based on the Conference track.
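
For orientation, the conference track scores an alignment A produced by a system against a reference alignment R using the usual precision, recall and F-measure over sets of correspondences; oa4qa applies the same ratios to query answer sets instead. The sketch below gives the standard definitions only; the uncertain-reference and query-based variants used by the organisers refine them, so the exact formulas they apply may differ:

    % Precision, recall and F-measure of an alignment A against a reference alignment R,
    % both viewed as sets of correspondences.
    \[
      \mathrm{Prec}(A,R) = \frac{|A \cap R|}{|A|}, \qquad
      \mathrm{Rec}(A,R)  = \frac{|A \cap R|}{|R|}, \qquad
      F_1(A,R) = \frac{2 \cdot \mathrm{Prec}(A,R) \cdot \mathrm{Rec}(A,R)}{\mathrm{Prec}(A,R) + \mathrm{Rec}(A,R)}
    \]
    % In the oa4qa setting, A and R are replaced by the set of answers a query returns under the
    % generated alignment and the set of answers under the reference alignment, respectively.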

Modalities

OAEI 2015 will continue the procedure, introduced in 2011, of running the evaluation on the SEALS platform. The results will be reported at the Ontology Matching workshop of the 14th International Semantic Web Conference (ISWC 2015).

The overall process of participation including how to accomplish tool bundling is described here.

A tool that participates in one of the tracks conducted under the SEALS modality will be evaluated on all the other SEALS tracks as well, even though the tool might be specialized for some specific kind of matching problem. We know that this can be a problem for systems that have been developed specifically for, e.g., matching biomedical ontologies; this point can be emphasized in the system's results paper in case the results for some specific track are not good at all.

Please note that a matcher may want to behave differently depending on the ontologies it is provided with; however, this should not be based on features specific to the tracks (e.g., a particular string in the URL or a particular class name) but on features of the ontologies themselves (e.g., there are no instances, or labels are in German).
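
As an illustration only, a system could derive such features directly from the loaded ontologies. The sketch below assumes the OWL API (3.x/4.x-style calls); the particular feature checks and how they influence the matcher are examples, not prescribed by OAEI:

    import java.io.File;
    import java.util.HashSet;
    import java.util.Set;

    import org.semanticweb.owlapi.apibinding.OWLManager;
    import org.semanticweb.owlapi.model.*;

    // Sketch: choose a strategy from ontology features, not from track-specific strings.
    public class OntologyFeatures {

        // True if the ontology declares no named individuals (nothing to instance-match).
        static boolean hasNoInstances(OWLOntology ont) {
            return ont.getIndividualsInSignature().isEmpty();
        }

        // Language tags used on rdfs:label annotations of classes (e.g. "de", "fr").
        static Set<String> labelLanguages(OWLOntology ont) {
            Set<String> langs = new HashSet<>();
            for (OWLClass cls : ont.getClassesInSignature()) {
                for (OWLAnnotationAssertionAxiom ax : ont.getAnnotationAssertionAxioms(cls.getIRI())) {
                    if (ax.getProperty().isLabel() && ax.getValue() instanceof OWLLiteral) {
                        OWLLiteral lit = (OWLLiteral) ax.getValue();
                        if (lit.hasLang()) {
                            langs.add(lit.getLang());
                        }
                    }
                }
            }
            return langs;
        }

        public static void main(String[] args) throws OWLOntologyCreationException {
            OWLOntologyManager man = OWLManager.createOWLOntologyManager();
            OWLOntology ont = man.loadOntologyFromOntologyDocument(new File(args[0]));
            System.out.println("no instances: " + hasNoInstances(ont));
            System.out.println("label languages: " + labelLanguages(ont));
        }
    }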

SEALS evaluation process

Following the successful campaigns since 2011, most of the tests will be evaluated under the SEALS platform. The evaluation process is detailed here, and in general it follows the same pattern as in past years:

  1. Participants wrap their tools as a SEALS platform package and register them with the SEALS portal (due to technical reasons, the seals-project.eu portal is not yet available); a schematic wrapper is sketched after this list;
  2. Participants can test their tools with the SEALS client on the datasets with reference alignments provided by each track organizer. The ids of these datasets are given on each track web page;
  3. Organizers run the evaluation on the SEALS platform, using the tools registered in the platform, on both blind and published datasets;
  4. For some tracks, results are (automatically) available on the SEALS portal.
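
For orientation only, here is a minimal sketch of the general shape a wrapped matcher takes: a single align(source, target) entry point that receives the URLs of the two ontologies to match and returns the URL of a file containing the produced alignment. The interface and class names below are illustrative placeholders, not the actual SEALS API; the real bridge interface and packaging layout are given in the tool bundling documentation mentioned above.

    import java.io.File;
    import java.io.PrintWriter;
    import java.net.URL;

    // Illustrative placeholder for the bridge a wrapped matcher exposes (not the real SEALS API).
    interface MatcherBridge {
        // Match the two ontologies and return the URL of the produced alignment file.
        URL align(URL sourceOntology, URL targetOntology) throws Exception;
    }

    // Trivial matcher showing only the calling convention: it emits an (empty) alignment file.
    public class DemoMatcher implements MatcherBridge {

        @Override
        public URL align(URL sourceOntology, URL targetOntology) throws Exception {
            // A real system would parse both ontologies here and compute correspondences.
            File out = File.createTempFile("alignment", ".rdf");
            try (PrintWriter w = new PrintWriter(out, "UTF-8")) {
                // Placeholder content; OAEI expects results in the Alignment (align) format.
                w.println("<!-- alignment produced by DemoMatcher -->");
            }
            return out.toURI().toURL();
        }
    }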

Schedule

July 10th
datasets available for prescreening (new deadline!).
July 31st
datasets are frozen (new deadline!).
August 31st
participants send final versions of their tools.
September 28th
evaluation is executed and results are analyzed.
October 5th
Preliminary version of system papers due.
October 12th
Ontology matching workshop.
November 16th
Final version of system papers due (sharp).

Presentation

From the results of the experiments, participants are expected to provide the organisers with a paper to be published in the proceedings of the Ontology Matching workshop. The paper must be no longer than 8 pages and formatted using the LNCS style. To ensure easy comparability among participants, it has to follow the given outline. The above-mentioned paper must be sent in PDF format by October 5th to Jerome . Euzenat (a) inria . fr with copy to pavel (a) dit . unitn . it and to ernesto . jimenez . ruiz (a) gmail . com

Participants may also submit a longer version of their paper, with a length justified by its technical content, to be published online in the CEUR-WS collection and on the OAEI web site (this last paper will be due just before the workshop).

The outline of the paper is as below (see templates for more details):

  1. Presentation of the system
    1. State, purpose, general statement
    2. Specific techniques used
    3. Adaptations made for the evaluation
    4. Link to the system and parameters file
    5. Link to the set of provided alignments (in align format)
  2. Results
  3. General comments
    (not necessarily as separate subsections, but preferably in this order)
    1. Comments on the results (strength and weaknesses)
    2. Discussions on the way to improve the proposed system
    3. Comments on the OAEI procedure (including comments on the SEALS evaluation, if relevant)
    4. Comments on the OAEI test cases
    5. Comments on the OAEI measures
    6. Proposed new measures
  4. Conclusions
  5. References

These papers are not peer-reviewed; they serve to keep track of the participants and to describe the matchers that took part in the campaign.

The results from both selected participants and organizers will be presented at the International Workshop on Ontology Matching, collocated with ISWC 2015, taking place in Bethlehem (PA, USA) on October 12th, 2015.