In our preliminary evaluation, only four systems (LogMap, LogMapLt, LogMapKG, and Matcha) managed to generate an output for at least one of the track's tasks.

We conducted the experiments using the MELT client, executing each system with its default settings and computing precision, recall, and F-measure. The reported execution times cover the whole processing pipeline, from ontology loading and environment preparation onward.

The evaluation was run on a Windows 10 (64-bit) desktop with an Intel Core i7-4770 CPU @ 3.40 GHz (4 cores) and 16 GB of allocated RAM.
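The scores below follow the standard definitions of precision, recall, and F-measure over sets of mappings. As a minimal sketch (not MELT's actual API; entity names and the `evaluate` helper are illustrative), the computation against a reference alignment looks like this:

```python
# Hedged sketch: precision, recall, and F-measure for an alignment,
# with mappings modeled as (source_entity, target_entity) pairs.
# Function and entity names are illustrative, not taken from MELT.

def evaluate(system_alignment, reference_alignment):
    """Return (precision, recall, f_measure) for a set of mappings."""
    system = set(system_alignment)
    reference = set(reference_alignment)
    true_positives = len(system & reference)
    precision = true_positives / len(system) if system else 0.0
    recall = true_positives / len(reference) if reference else 0.0
    f_measure = (2 * precision * recall / (precision + recall)
                 if precision + recall else 0.0)
    return precision, recall, f_measure

# Toy example: three reference mappings, of which the system finds two
# and adds one spurious mapping.
ref = [("envo:A", "sweet:A"), ("envo:B", "sweet:B"), ("envo:C", "sweet:C")]
out = [("envo:A", "sweet:A"), ("envo:B", "sweet:B"), ("envo:X", "sweet:Y")]
p, r, f = evaluate(out, ref)
print(round(p, 3), round(r, 3), round(f, 3))  # 0.667 0.667 0.667
```

A low precision with a high recall (as for Matcha on FISH-ZOOPLANKTON below) thus indicates a system that finds most reference mappings but also emits many spurious ones.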
1. Results for the ENVO-SWEET matching task
| System | Time (HH:MM:SS) | # Mappings | Precision | Recall | F-measure |
|---|---|---|---|---|---|
| LogMap | 00:00:21 | 683 | 0.776 | 0.659 | 0.713 |
| LogMapKG | 00:00:24 | 683 | 0.775 | 0.658 | 0.711 |
| LogMapLt | 00:04:47 | 595 | 0.803 | 0.595 | 0.683 |
2. Results for the MACROALGAE-MACROZOOBENTHOS matching task
| System | Time (HH:MM:SS) | # Mappings | Precision | Recall | F-measure |
|---|---|---|---|---|---|
| LogMapLt | 00:00:00 | 9 | 0.667 | 0.333 | 0.444 |
| LogMap | 00:00:02 | 29 | 0.276 | 0.444 | 0.340 |
| LogMapKG | 00:00:03 | 29 | 0.276 | 0.444 | 0.340 |
| Matcha | 00:00:05 | 45 | 0.200 | 0.500 | 0.286 |
3. Results for the FISH-ZOOPLANKTON matching task
| System | Time (HH:MM:SS) | # Mappings | Precision | Recall | F-measure |
|---|---|---|---|---|---|
| LogMapLt | 00:00:00 | 10 | 0.800 | 0.533 | 0.640 |
| Matcha | 00:00:08 | 47 | 0.277 | 0.867 | 0.419 |
| LogMapKG | 00:00:03 | 55 | 0.218 | 0.800 | 0.343 |
| LogMap | 00:00:02 | 32 | 0.094 | 0.200 | 0.128 |
4. Results for the NCBITAXON-TAXREFLD matching task
| System | Time (HH:MM:SS) | # Mappings | Precision | Recall | F-measure |
|---|---|---|---|---|---|
| Matcha | 00:01:18 | 71008 | 0.675 | 0.994 | 0.804 |
| LogMapLt | 13:56:54 | 72010 | 0.665 | 0.993 | 0.797 |
| LogMap | 00:06:31 | 72899 | 0.661 | 0.999 | 0.795 |
| LogMapKG | 00:06:13 | 72898 | 0.661 | 0.999 | 0.795 |
| System | Time (HH:MM:SS) | # Mappings | Precision | Recall | F-measure |
|---|---|---|---|---|---|
| LogMapLt | 00:00:00 | 290 | 0.600 | 0.994 | 0.748 |
| Matcha | 00:00:04 | 303 | 0.578 | 1.000 | 0.732 |
| LogMap | 00:00:00 | 304 | 0.576 | 1.000 | 0.731 |
| LogMapKG | 00:00:00 | 304 | 0.576 | 1.000 | 0.731 |
| System | Time (HH:MM:SS) | # Mappings | Precision | Recall | F-measure |
|---|---|---|---|---|---|
| LogMapLt | 00:00:00 | 2165 | 0.637 | 0.982 | 0.773 |
| LogMap | 00:00:02 | 2218 | 0.624 | 0.985 | 0.764 |
| LogMapKG | 00:00:02 | 2218 | 0.624 | 0.985 | 0.764 |
| Matcha | 00:00:14 | 2219 | 0.623 | 0.984 | 0.763 |
| System | Time (HH:MM:SS) | # Mappings | Precision | Recall | F-measure |
|---|---|---|---|---|---|
| Matcha | 00:00:36 | 12936 | 0.785 | 0.998 | 0.879 |
| LogMap | 00:00:25 | 12949 | 0.784 | 0.998 | 0.878 |
| LogMapKG | 00:00:24 | 12949 | 0.784 | 0.998 | 0.878 |
| LogMapLt | 00:00:03 | 12929 | 0.784 | 0.997 | 0.878 |
| System | Time (HH:MM:SS) | # Mappings | Precision | Recall | F-measure |
|---|---|---|---|---|---|
| LogMapLt | 00:00:06 | 26359 | 0.746 | 0.987 | 0.850 |
| Matcha | 00:01:01 | 26675 | 0.741 | 0.993 | 0.849 |
| LogMap | 00:01:00 | 26912 | 0.731 | 0.988 | 0.841 |
| LogMapKG | 00:00:55 | 26910 | 0.732 | 0.988 | 0.841 |
| System | Time (HH:MM:SS) | # Mappings | Precision | Recall | F-measure |
|---|---|---|---|---|---|
| LogMapLt | 00:00:00 | 477 | 0.746 | 0.997 | 0.854 |
| Matcha | 00:00:11 | 494 | 0.723 | 1.000 | 0.839 |
| LogMap | 00:00:00 | 496 | 0.720 | 1.000 | 0.837 |
| LogMapKG | 00:00:01 | 496 | 0.720 | 1.000 | 0.837 |
This evaluation was run by Naouel Karam and Alsayed Algergawy. If you have any problems working with the ontologies, any questions related to tool wrapping, or any suggestions for the Biodiv track, feel free to write an email to: karam [at] infai [.] org