This year, 8 of the 16 systems participating in OAEI took part in the track. All 8 systems (AML, DOME, FCAMapKG, LogMap, LogMapBio, LogMapLite, LogMapKG and POMap) generated meaningful output for at least one of the track tasks.
We have collected all generated alignments and made them available in a zip file via the following link. These alignments are the raw results on which the report below is based.
We conducted the experiments by executing each system with its standard settings and calculated precision, recall and F-measure against the reference alignment of each task. In the tables below, systems are listed alphabetically.
We ran the evaluation on a Windows 10 (64-bit) desktop with an Intel Core i5-7500 CPU @ 3.40 GHz (4 cores), allocating 15.7 GB of RAM.
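As a minimal sketch of how the three measures are computed, assuming each alignment is represented as a set of (source, target) correspondence pairs (the function below is illustrative, not the track's actual evaluation code):

```python
def evaluate(system_alignment, reference_alignment):
    """Precision, recall and F-measure of a system alignment against
    the reference alignment, both given as sets of
    (source_entity, target_entity) pairs."""
    correct = len(system_alignment & reference_alignment)
    precision = correct / len(system_alignment) if system_alignment else 0.0
    recall = correct / len(reference_alignment) if reference_alignment else 0.0
    f_measure = (2 * precision * recall / (precision + recall)
                 if precision + recall else 0.0)
    return precision, recall, f_measure
```

For example, AML's F-measure on FLOPO-PTO in the first table below follows directly from its precision and recall: 2 * 0.766 * 0.811 / (0.766 + 0.811) ≈ 0.788.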
1. Results for the FLOPO-PTO matching task
System     | Time (s) | # Mappings | # Unique | Precision | Recall | F-measure
-----------|----------|------------|----------|-----------|--------|----------
AML        | 42       | 511        | 323      | 0.766     | 0.811  | 0.788
DOME       | 8.22     | 141        | 1        | 0.993     | 0.588  | 0.739
FCAMapKG   | 7.2      | 171        | 2        | 0.836     | 0.601  | 0.699
LogMap     | 14.4     | 235        | 0        | 0.791     | 0.782  | 0.786
LogMapBio  | 480.6    | 239        | 4        | 0.778     | 0.782  | 0.780
LogMapKG   | 13.2     | 235        | 0        | 0.791     | 0.782  | 0.786
LogMapLite | 6.18     | 151        | 0        | 0.947     | 0.601  | 0.735
POMap      | 331      | 261        | 61       | 0.651     | 0.714  | 0.681
2. Results for the ENVO-SWEET matching task
System     | Time (s) | # Mappings | # Unique | Precision | Recall | F-measure
-----------|----------|------------|----------|-----------|--------|----------
AML        | 3        | 925        | 200      | 0.733     | 0.899  | 0.808
FCAMapKG   | 7.8      | 422        | 0        | 0.803     | 0.518  | 0.630
LogMap     | 26.9     | 443        | 11       | 0.772     | 0.523  | 0.624
LogMapKG   | 7.98     | 422        | 0        | 0.803     | 0.518  | 0.630
LogMapLite | 13.8     | 617        | 58       | 0.648     | 0.612  | 0.629
POMap      | 223      | 673        | 86       | 0.684     | 0.703  | 0.693
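A note on the "# Unique" column: we take it to report, for each system, the number of correspondences that no other participating system returned. A minimal sketch of such a count, under that assumption and with the same set-of-pairs representation as above (the function name is illustrative):

```python
def unique_counts(alignments):
    """For each system, count the correspondences that no other system
    returned; `alignments` maps a system name to its set of pairs."""
    counts = {}
    for system, mappings in alignments.items():
        # Union of all correspondences produced by the other systems.
        others = set().union(*(a for s, a in alignments.items() if s != system))
        counts[system] = len(mappings - others)
    return counts
```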
This track is organized by Naouel Karam, Abderrahmane Khiat and Alsayed Algergawy. If you have any problems working with the ontologies, any questions related to tool wrapping, or any suggestions related to the Biodiv track, feel free to write an email to: naouel [.] karam [at] fokus [.] fraunhofer [.] de