Ontology Alignment Evaluation Initiative - OAEI-2020 Campaign

Disease and Phenotype Track


Contact

If you have any questions or suggestions related to the results of this track, or if you notice any kind of error (wrong numbers, incorrect information about a matching system, etc.), feel free to write an email to ernesto [.] jimenez [.] ruiz [at] gmail [.] com or ianharrowconsulting [at] gmail [dot] com.

Evaluation setting

We ran the evaluation on an Ubuntu 18 laptop with an Intel Core i5-6300HQ CPU @ 2.30GHz × 4, allocating 15 GB of RAM.

Systems have been evaluated according to the following criteria: runtime and task completion, precision, recall and F-measure against the consensus alignments (vote=3), and coherence of the computed mappings.

We have used the OWL 2 EL reasoner ELK to compute an approximate number of unsatisfiable classes.

Check out the supporting scripts to reproduce the evaluation: https://github.com/ernestojimenezruiz/oaei-evaluation
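The "Unsat." and "Degree" figures reported in the result tables below are lower bounds: ELK is complete only for the OWL 2 EL profile, so on ontologies using richer constructs it may miss some unsatisfiable classes. A minimal sketch of how the incoherence degree is derived from the reasoner's output (function name and formatting are illustrative, not the track's actual scripts):

```python
def incoherence_report(num_unsatisfiable: int, num_classes: int) -> str:
    """Format the unsatisfiable-class count and incoherence degree.

    Both figures are prefixed with '≥' because the count comes from an
    approximate (EL-complete) reasoner and is therefore a lower bound.
    """
    degree = 100.0 * num_unsatisfiable / num_classes
    return f"≥{num_unsatisfiable:,} unsatisfiable (≥{degree:.1f}%)"
```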

Participation and success

In the OAEI 2020 Disease and Phenotype track, 7 participating systems were able to complete at least one of the tasks within an 8-hour timeout (see Table 1). DESKMatcher failed with an "OutOfMemoryException" in both tasks.

System | HP-MP (s) | DOID-ORDO (s) | Average (s) | # Tasks
LogMapLt | 7 | 8 | 8 | 2
ATBox | 16 | 21 | 19 | 2
LogMap | 32 | 25 | 29 | 2
AML | 102 | 200 | 151 | 2
Wiktionary | 854 | 858 | 856 | 2
LogMapBio | 1,355 | 2,034 | 1,695 | 2
ALOD2Vec | 2,384 | 2,809 | 2,597 | 2
# Systems | 7 | 7 | 765 | 14
Table 1: System runtimes (s) and task completion. The last row gives the number of systems completing each task, the average runtime over all systems, and the total number of completed tasks.

Use of background knowledge

LogMapBio uses BioPortal as a mediating ontology provider: it retrieves from BioPortal the 10 ontologies most suitable for the matching task.

LogMap uses normalisations and spelling variants from the general-purpose biomedical SPECIALIST Lexicon.

AML has three sources of background knowledge which can be used as mediators between the input ontologies: the Uber Anatomy Ontology (Uberon), the Human Disease Ontology (DOID) and the Medical Subject Headings (MeSH).


Results against the consensus alignments with vote 3

Tables 2 and 3 show the results achieved by each of the participating systems against the consensus alignment with vote=3. Note that systems participating with different variants contributed only once to the voting; that is, the voting was done by family of systems/variants rather than by individual systems.

Since the consensus alignments only allow us to assess how systems perform in comparison with one another, the proposed ranking is only a reference. Note that, on the one hand, some of the mappings in a consensus alignment may be erroneous (false positives): it suffices for 3 systems to agree on part of the erroneous mappings they find. On the other hand, the consensus alignments are not complete: there are likely correct mappings that no system is able to find, and a number of mappings found by only one system (and therefore absent from the consensus alignments) may be correct.
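The vote-based consensus described above can be sketched as follows, assuming a simplified representation where each alignment is a set of (source, target) entity pairs (the actual scripts in the repository linked above work on the alignment files directly):

```python
from collections import Counter
from itertools import chain

def consensus_alignment(alignments_by_family, vote=3):
    """Keep the mappings proposed by at least `vote` distinct system families.

    `alignments_by_family` maps a family name (e.g. "LogMap") to a set of
    (source_entity, target_entity) pairs; variants of the same family must
    be merged into a single set beforehand so that each family votes once.
    """
    votes = Counter(chain.from_iterable(alignments_by_family.values()))
    return {mapping for mapping, count in votes.items() if count >= vote}
```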

Nevertheless, the results with respect to the consensus alignments do provide some insights into the performance of the systems. For example, LogMap is the system whose mapping set is closest to the consensus with vote=3 (not necessarily the best system), while AML outputs a large set of unique mappings, that is, mappings not proposed by any other system. LogMap has a small set of unique mappings, as most of its mappings are also suggested by its variant LogMapBio, and vice versa.
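The precision, recall and F-measure scores in Tables 2 and 3 follow the standard set-based definitions, with the vote=3 consensus alignment playing the role of the reference. A minimal sketch, again assuming alignments are represented as sets of mapping pairs:

```python
def prf(system: set, reference: set):
    """Precision, recall and F-measure of a system alignment against a
    reference alignment (here, the vote=3 consensus)."""
    tp = len(system & reference)          # mappings shared with the reference
    p = tp / len(system) if system else 0.0
    r = tp / len(reference) if reference else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f
```

Since the consensus is neither fully correct nor complete, these scores should be read as relative indicators rather than absolute quality measures.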

HP-MP task

System | Time (s) | # Mappings | # Unique | Precision | Recall | F-measure | Unsat. | Degree
LogMap | 32 | 2,128 | 9 | 0.903 | 0.768 | 0.830 | ≥0 | ≥0%
LogMapBio | 1,355 | 2,198 | 62 | 0.884 | 0.777 | 0.827 | ≥0 | ≥0%
AML | 102 | 2,029 | 358 | 0.910 | 0.739 | 0.816 | ≥0 | ≥0%
LogMapLt | 7 | 1,370 | 0 | 0.997 | 0.546 | 0.706 | ≥0 | ≥0%
ATBox | 16 | 759 | 10 | 0.982 | 0.298 | 0.457 | ≥0 | ≥0%
ALOD2Vec | 2,384 | 67,943 | 469 | 0.024 | 0.641 | 0.046 | ≥0 | ≥0%
Wiktionary | 854 | 67,455 | 4 | 0.023 | 0.625 | 0.044 | ≥0 | ≥0%
Table 2: Results for the HP-MP task.

DOID-ORDO task

System | Time (s) | # Mappings | # Unique | Precision | Recall | F-measure | Unsat. | Degree
LogMapBio | 2,034 | 2,584 | 147 | 0.945 | 0.625 | 0.752 | ≥0 | ≥0%
AML | 200 | 4,781 | 195 | 0.682 | 0.834 | 0.750 | ≥0 | ≥0%
LogMap | 25 | 2,330 | 0 | 0.985 | 0.587 | 0.736 | ≥0 | ≥0%
Wiktionary | 858 | 7,336 | 5 | 0.479 | 0.899 | 0.625 | ≥3,288 | ≥24.1%
LogMapLt | 8 | 1,747 | 10 | 0.993 | 0.444 | 0.614 | ≥0 | ≥0%
ALOD2Vec | 2,809 | 7,805 | 457 | 0.454 | 0.907 | 0.605 | ≥12,787 | ≥93.6%
ATBox | 21 | 1,318 | 17 | 0.986 | 0.333 | 0.498 | ≥0 | ≥0%
Table 3: Results for the DOID-ORDO task.