Ontology Alignment Evaluation Initiative - OAEI-2018 Campaign

Complex track - Evaluation

General description

The complex track aims at evaluating systems able to generate complex correspondences, i.e., correspondences that relate an entity of one ontology to a construction over several entities of the other, rather than a simple one-to-one equivalence.
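For illustration (the entity names below are hypothetical and not taken from the evaluation datasets), a typical complex correspondence equates a named class of one ontology with a construction over entities of the other, e.g., in description-logic notation:

$$o_1{:}\mathit{AcceptedPaper} \;\equiv\; o_2{:}\mathit{Paper} \;\sqcap\; \exists\, o_2{:}\mathit{hasDecision}.\,o_2{:}\mathit{Acceptance}$$

A simple correspondence, by contrast, relates exactly one named entity on each side (e.g., $o_1{:}\mathit{Paper} \equiv o_2{:}\mathit{Paper}$).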

This track contains 4 datasets about 4 different domains: Conference, Hydrography, GeoLink and Taxon.

The detailed description of each dataset can be found on the OAEI Complex track page.

The table below presents the results for the four datasets. Only AMLC and CANARD were able to generate complex correspondences; the results for the other systems are reported in terms of simple alignments.

Matcher     | Conference                   | Hydrography (subtask 1)                           | GeoLink (subtask 1)          | Taxon
            | Precision  F-measure  Recall | Precision (Avg.)  F-measure (Avg.)  Recall (Avg.) | Precision  F-measure  Recall | Precision (Avg.)  QWR
ABC         | -          -          -      | 0.43              0.18              0.12          | -          -          -      | -                 -
ALOD2Vec    | -          -          -      | 0.5               0.09              0.05          | 0.78       0.19       0.11   | -                 -
AMLC        | 0.54       0.42       0.34   | -                 -                 -             | -          -          -      | -                 -
AML         | -          -          -      | -                 -                 -             | -          -          -      | 0.00              0.00
CANARD      | -          -          -      | -                 -                 -             | -          -          -      | 0.20              0.13
DOME        | -          -          -      | 0.35              0.09              0.06          | 0.44       0.17       0.11   | -                 -
FMapX       | -          -          -      | 0.46              0.11              0.07          | -          -          -      | -                 -
Holontology | -          -          -      | -                 -                 -             | -          -          -      | 0.22              0.00
KEPLER      | -          -          -      | 0.5               0.09              0.05          | -          -          -      | -                 -
LogMap      | -          -          -      | 0.44              0.08              0.05          | 0.85       0.18       0.1    | 0.54              0.07
LogMapBio   | -          -          -      | -                 -                 -             | -          -          -      | 0.28              0.00
LogMapKG    | -          -          -      | -                 -                 -             | 0.85       0.18       0.1    | -                 -
LogMapLt    | -          -          -      | -                 -                 -             | 0.73       0.19       0.11   | 0.16              0.10
POMAP++     | -          -          -      | 0.42              0.06              0.04          | 0.9        0.17       0.09   | 0.14              0.00
XMap        | -          -          -      | 0.21              0.09              0.06          | 0.39       0.15       0.09   | -                 -
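For the systems that only produce simple alignments, the scores in the table follow the classical set-based definitions of precision, recall and F-measure over correspondences (the columns marked (Avg.) report averaged values; the exact variants are described in the per-dataset sections below). A minimal sketch of the classical computation, using hypothetical toy alignments, could look as follows; this is not the official evaluation code:

```python
# Minimal sketch (not the official OAEI evaluation code): classical precision,
# recall and F-measure over sets of correspondences. A correspondence is
# represented here as a hashable tuple (source_entity, target_entity, relation).

def precision_recall_fmeasure(system, reference):
    """Compare a system alignment against a reference alignment."""
    system, reference = set(system), set(reference)
    correct = system & reference                      # correspondences found in both
    precision = len(correct) / len(system) if system else 0.0
    recall = len(correct) / len(reference) if reference else 0.0
    f_measure = (2 * precision * recall / (precision + recall)
                 if precision + recall else 0.0)      # harmonic mean of P and R
    return precision, recall, f_measure

# Hypothetical toy alignments for illustration only.
reference = {("o1:Paper", "o2:Article", "="), ("o1:Author", "o2:Writer", "=")}
system = {("o1:Paper", "o2:Article", "="), ("o1:Review", "o2:Article", "=")}
print(precision_recall_fmeasure(system, reference))   # (0.5, 0.5, 0.5)
```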

Results per dataset

Conference dataset

This dataset is based on the OntoFarm collection of conference ontologies [1]. The complex correspondences output by the systems were manually compared to those of the provided consensus alignment.

For this first evaluation, only equivalence correspondences were considered, and the confidence values of the correspondences were not taken into account.

The detailed results for this dataset are accessible on the Conference results page.

Hydrography dataset

This dataset is composed of three subtasks: Entity Identification, Relationship Identification, and Full Complex Alignment Identification. The alignments generated for each subtask have been evaluated using semantic precision and recall [5].
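As a reminder of the measure from [5], stated here in its idealized form (the paper also defines bounded variants that keep the sets finite), semantic precision and recall compare the correspondences entailed by the evaluated alignment A and by the reference alignment R, rather than the stated correspondences themselves:

$$P_{\mathrm{sem}}(A,R) = \frac{|\mathrm{Cn}(A) \cap \mathrm{Cn}(R)|}{|\mathrm{Cn}(A)|}, \qquad R_{\mathrm{sem}}(A,R) = \frac{|\mathrm{Cn}(A) \cap \mathrm{Cn}(R)|}{|\mathrm{Cn}(R)|}$$

where $\mathrm{Cn}(X)$ denotes the set of correspondences entailed by the alignment X together with the aligned ontologies.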

The detailed results for this dataset are accessible on the Hydrography results page.

GeoLink dataset

The evaluation of the systems on this benchmark [4] was performed by computing precision and recall for three tasks.

The detailed results for this dataset are accessible on the GeoLink results page.

Taxon dataset

The evaluation of this dataset is task-oriented: the generated correspondences are evaluated using a SPARQL query rewriting system, and their ability to answer a set of queries over each dataset is measured manually [2,3]. The alignments have to be expressed in EDOAL. The systems have been evaluated on a subset of the dataset (common scope), and the evaluation was blind.
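To illustrate the principle of this protocol (a minimal sketch under assumed names, not the actual rewriting system; the correspondences, prefixes and query below are hypothetical), a complex correspondence can be used to rewrite the graph pattern of a source SPARQL query into a pattern over the target ontology, after which the rewritten query is run against the target dataset and its answers are checked:

```python
# Minimal sketch of complex-correspondence-based SPARQL query rewriting,
# in the spirit of the Taxon evaluation. All names below are hypothetical
# examples, not the actual evaluation data or the official rewriting system.

# A complex correspondence maps a source triple pattern to a target graph
# pattern (here, one class is rewritten into a class plus a property restriction).
CORRESPONDENCES = {
    "?t a src:Species .": "?t a tgt:Taxon . ?t tgt:hasRank tgt:SpeciesRank .",
    "?t src:scientificName ?name .": "?t tgt:hasScientificName ?name .",
}

SOURCE_QUERY = """SELECT ?name WHERE {
  ?t a src:Species .
  ?t src:scientificName ?name .
}"""

def rewrite(query, correspondences):
    """Rewrite every mapped source pattern into its target pattern."""
    for source_pattern, target_pattern in correspondences.items():
        query = query.replace(source_pattern, target_pattern)
    return query

# The rewritten query would then be evaluated over the target dataset, and its
# answers compared (manually, in this evaluation) with the expected answers to
# decide whether the query is well rewritten.
print(rewrite(SOURCE_QUERY, CORRESPONDENCES))
```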

The detailed results for this dataset are accessible on the Taxon results page.

Organizers

References

[1] Ondřej Zamazal, Vojtěch Svátek. The Ten-Year OntoFarm and its Fertilization within the Onto-Sphere. Web Semantics: Science, Services and Agents on the World Wide Web, 43, 46-53. 2017.

[2] Élodie Thiéblin, Ollivier Haemmerlé, Nathalie Hernandez, Cassia Trojahn. Task-Oriented Complex Ontology Alignment: Two Alignment Evaluation Sets. In: European Semantic Web Conference. Springer, Cham, 655-670, 2018.

[3] Élodie Thiéblin, Fabien Amarger, Nathalie Hernandez, Catherine Roussey, Cassia Trojahn. Cross-querying LOD datasets using complex alignments: an application to agronomic taxa. In: Research Conference on Metadata and Semantics Research. Springer, Cham, 25-37, 2017.

[4] Lu Zhou, Michelle Cheatham, Adila Krisnadhi, Pascal Hitzler. A Complex Alignment Benchmark: GeoLink Dataset. In: International Semantic Web Conference. Springer, 2018.

[5] Jérôme Euzenat. Semantic Precision and Recall for Ontology Alignment Evaluation. In: International Joint Conference on Artificial Intelligence, 348-353, 2007.