Ontology Alignment Evaluation Initiative - OAEI-2018 Campaign

Complex track - Hydrography - Evaluation

Evaluation Tasks

There are three subtasks related to this evaluation:

  1. Entity Identification: For each entity in the source ontology, list all of the entities that are related to it in some way in the target ontology (the shape of this output is illustrated in the sketch after this list).
  2. Relationship Identification: Given a dictionary containing entities from the source ontology paired with all related entities, determine the expression that specifies the nature of each relation.
  3. Full Complex Alignment Identification: A combination of the two former steps, determining the complex alignments that exist between the source and target ontology.
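
As a rough illustration of the data involved in these subtasks (all entity names below are hypothetical and only show the shape of the outputs), subtask 1 yields a mapping from each source entity to its related target entities, and subtask 2 attaches a relation expression to each such pair:

    # All entity names are made up; they only illustrate the data shapes.

    # Subtask 1: each source entity mapped to its set of related target entities.
    related_entities = {
        "source:Reservoir": {"target:Reservoir", "target:WaterBody"},
        "source:Ditch":     {"target:Canal"},
    }

    # Subtask 2: for each related pair, an expression stating the nature of the
    # relation (e.g. an equivalence, or a subsumption involving a restriction).
    relation_expressions = {
        ("source:Ditch", "target:Canal"):
            "source:Ditch SubClassOf target:Canal",
        ("source:Reservoir", "target:WaterBody"):
            "source:Reservoir SubClassOf (target:WaterBody and (target:isManMade value true))",
    }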

Subtask 1 is evaluated based on standard precision, recall and F-measure, while subtasks 2 and 3 are evaluated using semantic precision, recall and F-measure, as defined in [1].
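
For subtask 1, the standard measures reduce to comparing the set of (source entity, target entity) pairs produced by a system against the reference set. The following is a minimal Python sketch of this computation under that set-based view (an illustration, not the official evaluation code):

    def precision_recall_f1(found, reference):
        """Standard precision, recall and F-measure over sets of correspondences.

        Both arguments are sets of (source_entity, target_entity) pairs.
        """
        true_positives = len(found & reference)
        precision = true_positives / len(found) if found else 0.0
        recall = true_positives / len(reference) if reference else 0.0
        if precision + recall == 0:
            return precision, recall, 0.0
        return precision, recall, 2 * precision * recall / (precision + recall)

    # Example: two of three found pairs are correct; the reference has four pairs,
    # giving precision 0.667, recall 0.500, F-measure 0.571.
    p, r, f = precision_recall_f1({("a", "x"), ("b", "y"), ("c", "z")},
                                  {("a", "x"), ("b", "y"), ("d", "w"), ("e", "v")})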

Results

None of the alignment systems entered in this year's OAEI was capable of producing results for subtasks 2 and 3, so the table below shows the results, evaluated according to subtask 1, of the systems that produced output on any of the ontology pairs within the test set. ABC (Alignment By Comments) is a baseline matcher that simply considers an entity in the target ontology (TE) to be related to a source ontology entity (SE) if either the label or comment of SE mentions the label of TE. A "-" indicates that the system did not produce any results or did not run to completion for that particular ontology pair.
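
As an illustration, a minimal Python sketch of such a baseline is given below; it assumes the entities have already been extracted into plain dictionaries with label and comment strings, which simplifies away the actual ontology parsing:

    def abc_baseline(source_entities, target_entities):
        """Sketch of the ABC (Alignment By Comments) baseline.

        A target entity TE is considered related to a source entity SE if the
        label or comment of SE mentions the label of TE. Entities are assumed
        to be pre-extracted into dicts of the form
        {entity_id: {"label": str, "comment": str}}.
        """
        alignment = {}
        for se_id, se in source_entities.items():
            se_text = " ".join([se.get("label", ""), se.get("comment", "")]).lower()
            related = {
                te_id
                for te_id, te in target_entities.items()
                if te.get("label") and te["label"].lower() in se_text
            }
            if related:
                alignment[se_id] = related
        return alignment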

Matcher     Precision  Recall  F-measure

Hydro3-SWO
ABC         0.478      0.289   0.360
ALOD2Vec    1.000      0.132   0.233
DOME        1.000      0.132   0.233
FMapX       0.833      0.237   0.369
KEPLER      1.000      0.158   0.273
LogMap      1.000      0.158   0.273
POMAP++     1.000      0.132   0.233
XMap        0.833      0.237   0.369

HydrOntology(translated)-SWO
ABC         0.176      0.057   0.090
ALOD2Vec    -          -       -
DOME        0.263      0.024   0.044
FMapX       -          -       -
KEPLER      -          -       -
LogMap      0.778      0.033   0.063
POMAP++     0.667      0.012   0.024
XMap        -          -       -

HydrOntology(native)-SWO
ABC         0.250      0.005   0.010
ALOD2Vec    -          -       -
DOME        0.000      0.000   0.000
FMapX       -          -       -
KEPLER      -          -       -
LogMap      -          -       -
POMAP++     -          -       -
XMap        -          -       -

Cree-SWO
ABC         0.833      0.139   0.240
ALOD2Vec    1.000      0.069   0.129
DOME        0.119      0.069   0.087
FMapX       1.000      0.042   0.081
KEPLER      1.000      0.042   0.081
LogMap      -          -       -
POMAP++     -          -       -
XMap        -          -       -

Discussion

The Hydro3-SWO ontology pair is the closest to existing OAEI benchmarks: the majority of relationships between these two ontologies are 1-to-1 equivalences, so it is not surprising that performance is best on this pair. The low performance on the native (Spanish) version of the HydrOntology-SWO pair highlights the lack of multilingual support in many existing matching systems. The Cree ontology also contains many non-English labels, but its comments are in English. Most existing systems do not currently take advantage of this information, which allows the baseline matcher to outperform the others in terms of F-measure. The recall of the baseline system is in general higher than that of the other matchers because many existing matchers filter their match candidates to produce 1-to-1 alignments, whereas the alignments in this track are generally 1:n or m:n.

[1] Jérôme Euzenat. "Semantic Precision and Recall for Ontology Alignment Evaluation." In Proceedings of IJCAI 2007.