Ontology Alignment Evaluation Initiative - OAEI-2019 Campaign

Results OAEI 2019::Disease and Phenotype Track

Contact

If you have any questions or suggestions related to the results of this track, or if you notice any kind of error (wrong numbers, incorrect information on a matching system, etc.), feel free to write to ernesto [.] jimenez [.] ruiz [at] gmail [.] com or ianharrowconsulting [at] gmail [dot] com.

Evaluation setting

We ran the evaluation on an Ubuntu 18 laptop with an Intel Core i5-6300HQ CPU @ 2.30GHz × 4 and 15 GB of RAM allocated.

Systems have been evaluated according to the following criteria:

- Runtime and number of completed tasks (Table 1).
- Precision, recall and F-measure with respect to the consensus alignments with vote 3 (Tables 2 and 3).
- Coherence of the computed alignments, i.e., the number and degree (percentage) of unsatisfiable classes in the aligned ontologies.

We have used the OWL 2 reasoner HermiT to compute the number of unsatisfiable classes.
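
As an illustration, the following is a minimal sketch of this coherence check using the OWL API together with HermiT. It assumes the two input ontologies and the computed mappings have already been merged into a single OWL file; the UnsatCount class and its command-line handling are ours, not part of the evaluation scripts.

    import java.io.File;
    import org.semanticweb.HermiT.ReasonerFactory;
    import org.semanticweb.owlapi.apibinding.OWLManager;
    import org.semanticweb.owlapi.model.OWLOntology;
    import org.semanticweb.owlapi.model.OWLOntologyManager;
    import org.semanticweb.owlapi.reasoner.OWLReasoner;

    public class UnsatCount {
        public static void main(String[] args) throws Exception {
            OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
            // Merged ontology: the two input ontologies plus the mappings as OWL axioms
            OWLOntology ontology = manager.loadOntologyFromOntologyDocument(new File(args[0]));
            OWLReasoner reasoner = new ReasonerFactory().createReasoner(ontology);
            // Unsatisfiable classes are those equivalent to owl:Nothing;
            // getEntitiesMinusBottom() excludes owl:Nothing itself from the count
            int unsat = reasoner.getUnsatisfiableClasses().getEntitiesMinusBottom().size();
            int total = ontology.getClassesInSignature().size();
            System.out.printf("Unsatisfiable: %d of %d classes (%.3f%%)%n",
                    unsat, total, 100.0 * unsat / total);
            reasoner.dispose();
        }
    }

The printed percentage corresponds to the "Degree" column in the incoherence analysis of Tables 2 and 3.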

Check out the supporting scripts to reproduce the evaluation: https://github.com/ernestojimenezruiz/oaei-evaluation

Participation and success

In the OAEI 2019 Disease and Phenotype track, 8 participating systems were able to complete at least one of the tasks within a 6-hour timeout (see Table 1).

System       HP-MP (s)  DOID-ORDO (s)  Average (s)  # Tasks
LogMapLt             6              8            7        2
DOME                11             17           14        2
FCAMapKG            14             23           19        2
LogMap              43             24           34        2
AML                 90            173          132        2
Wiktionary         745            531          638        2
LogMapBio        1,740          2,312        2,026        2
POMAP++          1,862          2,497        2,180        2
# Systems            8              8          631       16
Table 1: System runtimes (s) and task completion. The last row gives the number of systems completing each task, the average runtime over all systems, and the total number of completed tasks.

Use of background knowledge

LogMapBio uses BioPortal as a mediating-ontology provider; that is, it retrieves from BioPortal the 10 ontologies most suitable for the matching task.
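
As an illustrative sketch (not LogMapBio's actual code), the snippet below queries the BioPortal Recommender REST service for ontologies covering a sample input text; the input string and any post-processing of the returned JSON ranking are our assumptions.

    import java.net.URI;
    import java.net.URLEncoder;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.nio.charset.StandardCharsets;

    public class BioPortalRecommender {
        public static void main(String[] args) throws Exception {
            String apiKey = "YOUR_BIOPORTAL_API_KEY";  // free key from bioportal.bioontology.org
            // Sample input text; LogMapBio derives its query from the ontologies being matched
            String input = URLEncoder.encode("hereditary breast cancer phenotype",
                    StandardCharsets.UTF_8);
            HttpRequest request = HttpRequest.newBuilder(URI.create(
                    "https://data.bioontology.org/recommender?input=" + input
                            + "&apikey=" + apiKey)).build();
            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.body());  // JSON ranking of candidate ontologies
        }
    }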

LogMap uses normalisations and spelling variants from the general-purpose (biomedical) SPECIALIST Lexicon.

AML has three sources of background knowledge that can be used as mediators between the input ontologies: the Uber Anatomy Ontology (Uberon), the Human Disease Ontology (DOID) and the Medical Subject Headings (MeSH).

Results against the consensus alignments with vote 3

Tables 2 and 3 show the results achieved by each of the participating systems against the consensus alignment with vote=3. Note that systems participating with several variants contributed only once to the voting; that is, votes were counted per family of systems/variants rather than per individual system, as sketched below.
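
A minimal sketch of this family-based voting, assuming each mapping is keyed as a "sourceIRI|targetIRI" string and mappings have already been grouped per family (class and method names are illustrative, not taken from the evaluation scripts):

    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    public class ConsensusVote {
        // Returns the mappings proposed by at least 'vote' system families.
        public static Set<String> consensus(Map<String, Set<String>> mappingsByFamily, int vote) {
            Map<String, Integer> counts = new HashMap<>();
            for (Set<String> familyMappings : mappingsByFamily.values())
                for (String m : familyMappings)  // each family counts at most once per mapping
                    counts.merge(m, 1, Integer::sum);
            Set<String> result = new HashSet<>();
            for (Map.Entry<String, Integer> e : counts.entrySet())
                if (e.getValue() >= vote)
                    result.add(e.getKey());
            return result;
        }
    }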

Since the consensus alignments only allow us to assess how systems perform in comparison with one another, the proposed ranking is only a reference. Note that, on the one hand, some of the mappings in the consensus alignment may be erroneous (false positives), since it only takes three system families agreeing on an erroneous mapping for it to enter the consensus. On the other hand, the consensus alignments are not complete: there are likely correct mappings that no system is able to find, and the mappings found by only one system (and therefore excluded from the consensus alignments) may also be correct.

Nevertheless, the results with respect to the consensus alignments do provide some insight into the performance of the systems. For example, LogMap is the system whose mapping set is closest to the consensus with vote=3 (which does not necessarily make it the best system), while AML outputs a large set of unique mappings, that is, mappings that are not proposed by any other system. LogMap has a small set of unique mappings, as most of its mappings are also suggested by its variant LogMapBio, and vice versa.
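
For reference, the precision, recall and F-measure columns in Tables 2 and 3 can be computed against the vote-3 consensus as in the following sketch (a helper of ours, not part of the OAEI scripts):

    import java.util.Set;

    public class AlignmentScores {
        // Precision/recall/F-measure of a system's mappings against the consensus set.
        public static void score(Set<String> system, Set<String> consensus) {
            long tp = system.stream().filter(consensus::contains).count();  // true positives
            double precision = system.isEmpty() ? 0.0 : (double) tp / system.size();
            double recall = consensus.isEmpty() ? 0.0 : (double) tp / consensus.size();
            double f = (precision + recall == 0.0) ? 0.0
                    : 2 * precision * recall / (precision + recall);
            System.out.printf("P=%.3f R=%.3f F=%.3f%n", precision, recall, f);
        }
    }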

HP-MP task

System       Time (s)  # Mappings  # Unique  Precision  Recall  F-measure  Unsat.  Degree
LogMap             43       2,130         1      0.879   0.824      0.851       0      0%
LogMapBio       1,740       2,201        50      0.861   0.833      0.847       0      0%
AML                90       2,029       330      0.891   0.795      0.840       0      0%
LogMapLt            6       1,370         2      0.996   0.600      0.749       0      0%
POMAP++         1,862       1,502       218      0.857   0.566      0.682       0      0%
FCAMapKG           14         734         0      0.997   0.322      0.487       0      0%
DOME               11         692         0      0.997   0.303      0.465       0      0%
Wiktionary        745      61,872    60,634      0.020   0.549      0.039       0      0%
Table 2: Results for the HP-MP task.

DOID-ORDO task

System       Time (s)  # Mappings  # Unique  Precision  Recall  F-measure  Unsat.  Degree
LogMapBio       2,312       2,547       123      0.911   0.807      0.856       0      0%
LogMap             24       2,323         0      0.947   0.765      0.846       0      0%
POMAP++         2,497       2,563       192      0.887   0.790      0.836       0      0%
LogMapLt            8       1,747        20      0.989   0.601      0.748       0      0%
AML               173       4,781     2,342      0.521   0.866      0.651       0      0%
FCAMapKG           23       1,274         2      0.999   0.443      0.614       0      0%
DOME               17       1,235         5      0.993   0.426      0.596       0      0%
Wiktionary        531         909       366      0.573   0.181      0.275       7  0.067%
Table 3: Results for the DOID-ORDO task.


Related publications

Paper describing the experiences and results of the OAEI 2016 Disease and Phenotype track.