The complex track aims to evaluate systems that can generate both simple and complex correspondences.
This track contains 5 datasets from 4 different domains: Conference, Populated Conference, Hydrography, GeoLink and Taxon.
The detailed description of each dataset can be found on the OAEI Complex track page.
The table below presents the results for the 5 datasets. Only AMLC, AROA, and CANARD were able to generate complex correspondences. The results for the other systems are reported in terms of simple alignments.
| System | Conference relaxed Precision | Conference relaxed F-measure | Conference relaxed Recall | Pop. Conference Precision (classical, not disjoint) | Pop. Conference Coverage (classical, query F-meas.) | Hydrography relaxed Precision | Hydrography relaxed F-measure | Hydrography relaxed Recall | GeoLink relaxed Precision | GeoLink relaxed F-measure | GeoLink relaxed Recall | Taxon Precision (classical, overlap) | Taxon Coverage (classical, overlap) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| AGM | - | - | - | - | - | - | - | - | - | - | - | 0.06 - 0.14 | 0.03 - 0.04 |
| Alin | - | - | - | 0.68 - 0.98 | 0.20 - 0.28 | - | - | - | - | - | - | - | - |
| AML | - | - | - | 0.59 - 0.93 | 0.31 - 0.37 | - | - | - | - | - | - | 0.53 | 0 |
| AMLC | 0.31 | 0.34 | 0.37 | 0.30 - 0.59 | 0.46 - 0.50 | 0.45 | 0.10 | 0.05 | 0.50 | 0.32 | 0.23 | - | - |
| CANARD | - | - | - | 0.21 - 0.88 | 0.40 - 0.51 | - | - | - | 0.89 | 0.54 | 0.39 | 0.08 - 0.91 | 0.14 - 0.36 |
| DOME | - | - | - | 0.59 - 0.94 | 0.23 - 0.31 | - | - | - | - | - | - | - | - |
| FCAMap-KG | - | - | - | 0.51 - 0.82 | 0.21 - 0.28 | - | - | - | - | - | - | 0.63 - 0.96 | 0.03 - 0.05 |
| Lily | - | - | - | 0.45 - 0.73 | 0.23 - 0.28 | - | - | - | - | - | - | - | - |
| LogMap | - | - | - | 0.56 - 0.96 | 0.25 - 0.32 | 0.67 | 0.10 | 0.05 | 0.85 | 0.29 | 0.18 | 0.63 - 0.79 | 0.11 - 0.14 |
| LogMapBio | - | - | - | - | - | 0.7 | 0.10 | 0.05 | - | - | - | 0.54 - 0.72 | 0.08 - 0.11 |
| LogMapKG | - | - | - | 0.56 - 0.96 | 0.25 - 0.32 | 0.67 | 0.10 | 0.05 | 0.85 | 0.29 | 0.18 | 0.55 - 0.69 | 0.14 - 0.17 |
| LogMapLt | - | - | - | 0.50 - 0.87 | 0.23 - 0.32 | 0.66 | 0.10 | 0.06 | 0.69 | 0.36 | 0.25 | 0.22 - 0.41 | 0.08 - 0.15 |
| ONTMAT1 | - | - | - | 0.67 - 0.98 | 0.20 - 0.28 | - | - | - | - | - | - | - | - |
| POMAP++ | - | - | - | 0.25 - 0.54 | 0.20 - 0.29 | 0.65 | 0.07 | 0.04 | 0.9 | 0.26 | 0.16 | 1 | 0 |
| Wiktionary | - | - | - | 0.48 - 0.88 | 0.26 - 0.34 | - | - | - | - | - | - | - | - |
The complex correspondences generated by the systems were manually compared to those of the provided consensus alignment.
For this evaluation, only equivalence correspondences were considered, and the confidence of the correspondences was not taken into account.
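The normalization described above (keeping only equivalence correspondences and discarding confidence values) can be sketched as follows; the tuple layout of a correspondence is an assumption for illustration, not the track's actual data format:

```python
# Hypothetical correspondence record: (entity1, entity2, relation, confidence).
def equivalences_for_comparison(alignment):
    """Keep only equivalence ("=") correspondences and drop their
    confidence values, since confidence is not taken into account."""
    return {(e1, e2) for (e1, e2, rel, conf) in alignment if rel == "="}
```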
The detailed results for this track are available on the Conference results page.
In this subtrack, the alignments are automatically evaluated over a populated version of the Conference dataset.
The dataset as well as the evaluation systems are available at https://framagit.org/IRIT_UT2J/conference-dataset-population.
Two metrics are computed: a Coverage score and a Precision score.
The systems were run on both the original and the populated versions of the Conference dataset.
The best-scoring set of alignments output by each system gives its final score for this track. For example, AML performed better on the original Conference dataset than on the populated version, so its alignments on the original dataset were kept.
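The Coverage score in this subtrack is based on rewriting queries through the evaluated alignment and comparing their answers over the populated dataset with reference answers. A minimal sketch of the query F-measure this relies on, assuming answer sets of instance identifiers (the function name is hypothetical):

```python
def query_fmeasure(reference_answers, rewritten_answers):
    """Classical F-measure between the instance set returned by a
    reference query and the set returned by its rewritten counterpart."""
    common = len(reference_answers & rewritten_answers)
    if common == 0:
        return 0.0
    precision = common / len(rewritten_answers)
    recall = common / len(reference_answers)
    return 2 * precision * recall / (precision + recall)
```

With identical answer sets the score is 1.0; with disjoint sets, 0.0.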
The detailed results for this track are available on the Populated Conference results page.
In this subtrack, in order to explain the performance of the alignment systems, we break the evaluation down into three subtasks: entity identification, relationship identification, and full complex alignment identification. The alignments generated for the final results were evaluated using relaxed precision and recall.
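Relaxed precision and recall (Ehrig & Euzenat, 2005) replace the strict set intersection of classical precision/recall with a proximity score in [0, 1], so near-miss correspondences receive partial credit. One common instantiation matches each correspondence to its closest counterpart; the sketch below assumes a user-supplied `proximity` function and is not the track's exact scoring code:

```python
def relaxed_precision_recall(system, reference, proximity):
    """Relaxed precision/recall in the spirit of Ehrig & Euzenat (2005).
    `proximity(a, r)` returns a score in [0, 1]; 1.0 means exact match."""
    if not system or not reference:
        return 0.0, 0.0
    # Credit each system correspondence with its best match in the reference,
    # and each reference correspondence with its best match in the system output.
    found = sum(max(proximity(a, r) for r in reference) for a in system)
    hit = sum(max(proximity(a, r) for a in system) for r in reference)
    return found / len(system), hit / len(reference)
```

With an exact-match proximity (1.0 if equal, else 0.0), these reduce to classical precision and recall.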
The detailed results for this track are available on the Hydrography results page.
The evaluation of the GeoLink benchmark applies the same methods as the Hydrography benchmark: the systems are evaluated by computing relaxed precision and recall on the final results.
The detailed results for this track are available on the GeoLink results page.
Even though the ontologies of the Taxon dataset share a common scope (plant taxonomy), they are unevenly populated. For this reason, the automatic evaluation system cannot be applied to this dataset.
First, the alignments were filtered to remove correspondences that align identical URIs and correspondences that align instances. The remaining correspondences were then manually classified as equivalent, more general, more specific, overlapping, or disjoint.
Six reference SPARQL queries are used to compute the Coverage score.
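The filtering step applied before manual classification can be sketched as follows; the pair representation of correspondences and the precomputed set of instance URIs are assumptions for illustration:

```python
def filter_taxon_alignment(correspondences, instance_uris):
    """Remove identity mappings (same URI on both sides) and
    correspondences involving instances, keeping the rest for
    manual classification."""
    return [(s, t) for (s, t) in correspondences
            if s != t and s not in instance_uris and t not in instance_uris]
```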
The detailed results for this track are available on the Taxon results page.
Ondřej Zamazal, Vojtěch Svátek. The Ten-Year OntoFarm and its Fertilization within the Onto-Sphere. Web Semantics: Science, Services and Agents on the World Wide Web, 43, 46-53, 2017.
Élodie Thiéblin, Ollivier Haemmerlé, Nathalie Hernandez, Cassia Trojahn. Task-Oriented Complex Ontology Alignment: Two Alignment Evaluation Sets. In: European Semantic Web Conference. Springer, Cham, 655-670, 2019.
Élodie Thiéblin, Fabien Amarger, Nathalie Hernandez, Catherine Roussey, Cassia Trojahn. Cross-querying LOD datasets using complex alignments: an application to agronomic taxa. In: Research Conference on Metadata and Semantics Research. Springer, Cham, 25-37, 2017.
Lu Zhou, Michelle Cheatham, Adila Krisnadhi, Pascal Hitzler. A Complex Alignment Benchmark: GeoLink Dataset. In: International Semantic Web Conference. Springer, 2019.
Marc Ehrig, Jérôme Euzenat. Relaxed precision and recall for ontology matching. K-CAP 2005 Workshop on Integrating Ontologies, Banff, Canada, 2005.