There are three subtasks related to this evaluation, and all three are evaluated using standard precision, recall, and F-measure.
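Since each subtask is scored by comparing a system's correspondences against a reference alignment, the computation reduces to set operations over correspondences. Below is a minimal sketch of that computation, assuming correspondences are represented as (entity1, entity2, relation) tuples; the function name `evaluate` and the example URIs are illustrative assumptions, not part of the OAEI tooling.

```python
# Minimal sketch of the standard evaluation: precision, recall, and F-measure
# computed over sets of correspondences. The tuple representation and the
# example URIs below are illustrative assumptions, not the OAEI toolchain.

def evaluate(system_alignment, reference_alignment):
    """Score a system alignment against the reference alignment."""
    system = set(system_alignment)
    reference = set(reference_alignment)
    correct = system & reference  # system correspondences that appear in the reference
    precision = len(correct) / len(system) if system else 0.0
    recall = len(correct) / len(reference) if reference else 0.0
    f_measure = (2 * precision * recall / (precision + recall)
                 if precision + recall > 0 else 0.0)
    return precision, recall, f_measure

# Toy usage: the system finds one of the two reference correspondences.
reference = {("gbo:Cruise", "gmo:Cruise", "="), ("gbo:Award", "gmo:FundingAward", "=")}
system = {("gbo:Cruise", "gmo:Cruise", "=")}
print(evaluate(system, reference))  # (1.0, 0.5, 0.666...)
```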
Around 16 ontology alignment systems participated in this year's OAEI. Unfortunately, none of them was able to produce results for subtasks 2 and 3 of the GeoLink benchmark. The following table summarizes the evaluation results of the matchers that produced alignments for subtask 1.
| Matcher  | Precision | Recall | F-measure |
|----------|-----------|--------|-----------|
| ALOD2Vec | 0.78      | 0.11   | 0.19      |
| DOME     | 0.44      | 0.11   | 0.17      |
| LogMap   | 0.85      | 0.10   | 0.18      |
| LogMapKG | 0.85      | 0.10   | 0.18      |
| LogMapLt | 0.73      | 0.11   | 0.19      |
| POMAP++  | 0.90      | 0.09   | 0.17      |
| XMap     | 0.39      | 0.09   | 0.15      |
Seven systems produced an alignment for subtask 1 of the GeoLink benchmark. Among these alignments, all correspondences between the GeoLink Base Ontology (GBO) and the GeoLink Modular Ontology (GMO) are 1-to-1 equivalences. The precision of most systems is relatively high, which indicates that traditional ontology alignment systems can handle the simple relations in real-world ontologies, since the GeoLink benchmark is derived from a real-world use case. However, the low recall unsurprisingly reflects that current ontology alignment systems are not yet capable of identifying more complex relations, which we hope will improve in the coming years.
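To make the gap between simple and complex correspondences concrete, the sketch below contrasts a 1-to-1 equivalence with a complex correspondence in which a single GBO property corresponds to a chain of GMO properties. The entity names and the chain are hypothetical illustrations, not taken from the actual GeoLink reference alignment.

```python
# Hypothetical illustration; entity names are not from the actual reference alignment.

# Simple 1-to-1 equivalence: a single GBO entity maps to a single GMO entity.
# This is the kind of correspondence evaluated in subtask 1.
simple_correspondence = ("gbo:Cruise", "gmo:Cruise", "=")

# Complex correspondence (sketch): a single GBO property corresponds to a
# chain of GMO properties. Discovering such mappings requires reasoning over
# combinations of entities, which current matchers do not attempt.
complex_correspondence = {
    "entity1": "gbo:hasAward",
    "entity2": ["gmo:isDescribedBy", "gmo:fundedBy"],  # hypothetical property chain
    "relation": "=",
}
```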