Ontology Alignment Evaluation Initiative - OAEI-2018 Campaign

Results of Evaluation for the Biodiv track within OAEI 2018

Participants

This year the track had 8 participants out of the 16 systems registered for OAEI 2018. Seven systems (AML, LogMap, LogMapBio, LogMapLt, Lily, XMap and POMap) managed to generate meaningful output; the eighth, KEPLER, did not generate mappings within the allocated time.

Generated alignments

We have collected all generated alignments and made them available in a zip file via the link below. These alignments are the raw results on which this report is based.

>>> download raw results
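
For readers who want to inspect these files programmatically: the alignments follow the RDF/XML serialization of the Alignment format commonly used in OAEI. Below is a minimal Python sketch for reading the correspondence pairs, assuming that serialization and its usual namespace (an assumption not verified against every file in the archive; function and variable names are our own):

    import xml.etree.ElementTree as ET

    # Namespaces as typically declared in Alignment API files (assumption).
    ALIGN = "{http://knowledgeweb.semanticweb.org/heterogeneity/alignment}"
    RDF = "{http://www.w3.org/1999/02/22-rdf-syntax-ns#}"

    def load_alignment(path):
        """Return the set of (entity1, entity2) URI pairs in an alignment file."""
        pairs = set()
        for cell in ET.parse(path).getroot().iter(ALIGN + "Cell"):
            e1 = cell.find(ALIGN + "entity1").get(RDF + "resource")
            e2 = cell.find(ALIGN + "entity2").get(RDF + "resource")
            pairs.add((e1, e2))
        return pairs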

Experimental setting

We conducted the experiments by executing each system in its standard settings and computed precision, recall and F-measure (the harmonic mean of precision and recall) for each generated alignment against the reference alignment. In the tables below, systems are listed alphabetically.
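
As an illustration of these measures, here is a minimal Python sketch that computes them from a system alignment and a reference alignment, both treated as sets of entity pairs (e.g. as loaded by the load_alignment sketch above; file names below are placeholders, not the actual names in the zip archive):

    def evaluate(system, reference):
        """Precision, recall and F-measure of a system alignment against
        a reference alignment, both given as sets of entity-URI pairs."""
        correct = len(system & reference)
        precision = correct / len(system) if system else 0.0
        recall = correct / len(reference) if reference else 0.0
        f_measure = (2 * precision * recall / (precision + recall)
                     if precision + recall else 0.0)
        return precision, recall, f_measure

    # Hypothetical usage; file names are placeholders.
    system = load_alignment("AML-flopo-pto.rdf")
    reference = load_alignment("flopo-pto-reference.rdf")
    print(evaluate(system, reference))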

We ran the evaluation on a Windows 10 (64-bit) desktop with an Intel Core i5-7500 CPU @ 3.40 GHz x 4 and 15.7 GB of allocated RAM.

Results

In the following, we report the results obtained by the participating systems on the two matching tasks of the track: FLOPO-PTO and ENVO-SWEET.

1. Results for the FLOPO and PTO matching problem

System      # Mappings   Precision   F-measure   Recall
AML         233          0.88        0.86        0.84
Lily        176          0.813       0.681       0.586
LogMap      235          0.817       0.802       0.787
LogMapBio   239          0.803       0.795       0.787
LogMapLt    151          0.987       0.755       0.661
POMap       161          0.663       0.685       0.709
XMap        153          0.987       0.761       0.619

Table 1: Results for FLOPO-PTO

2. Results for the ENVO and SWEET matching problem

System      # Mappings   Precision   F-measure   Recall
AML         791          0.776       0.844       0.926
Lily        491          0.866       0.737       0.641
LogMap      583          0.839       0.785       0.738
LogMapBio   572          0.839       0.777       0.724
LogMapLt    740          0.732       0.772       0.817
POMap       583          0.839       0.785       0.738
XMap        547          0.868       0.785       0.716

Table 2: Results for ENVO-SWEET

Conclusions

AML achieved the best F-measure on both tasks (0.86 on FLOPO-PTO and 0.844 on ENVO-SWEET) and the highest recall on ENVO-SWEET. On FLOPO-PTO, LogMapLt and XMap reached the highest precision (0.987), at the cost of lower recall. Overall, seven of the eight registered systems were able to produce meaningful alignments for both matching tasks of the track.

Contact

This track is organized by Naouel Karam, Abderrahmane Khiat and Alsayed Algergawy. If you have any problems working with the ontologies, any questions related to tool wrapping, or any suggestions related to the Biodiv track, feel free to write an email to: naouel [.] karam [at] fokus [.] fraunhofer [.] de