Ontology Alignment Evaluation Initiative

Evaluation of 'Conference track'

According to the nature of this track, we are mainly interested in some "interesting" mappings ("nuggets"). Although traditional evaluation was not our intention, we carried out some evaluation as a side-effect of processing the results from our six participants. All statistics, as well as precision and recall, were provisionally computed by the track organisers and may therefore be subjective; the focus of the track is on interesting individual alignments and repeated patterns rather than on precision/recall figures.
So far, we have manually labelled 6898 mappings from the participants. In order to make the evaluation process more balanced, we transformed all participants' results into 91 alignments, except for the results of the SEMA tool: the SEMA team delivered 13 alignments, mapping all ontologies to the EKAW ontology. Additionally, we only considered mappings with a confidence measure higher than 0.7.
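As a rough illustration of this filtering step (a minimal sketch, not the organisers' actual tooling; the mapping structure and entity names below are illustrative assumptions), mappings with a confidence measure of 0.7 or lower are simply discarded before labelling:

    # Minimal sketch of the confidence filtering described above.
    # Each mapping is assumed to be an (entity1, entity2, measure) triple;
    # the entity names used in the example are hypothetical.
    Mapping = tuple[str, str, float]

    THRESHOLD = 0.7

    def filter_mappings(mappings: list[Mapping]) -> list[Mapping]:
        # Keep only mappings whose confidence measure exceeds 0.7.
        return [m for m in mappings if m[2] > THRESHOLD]

    example = [
        ("ekaw#Paper", "confOf#Contribution", 0.93),
        ("ekaw#Person", "confOf#Topic", 0.41),
    ]
    print(filter_mappings(example))  # only the first mapping is kept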



The table above encompasses several numerical statistics on the results of the six participants, named after their systems (ASMOV, Falcon, Lily, OLA, OntoDNA and SEMA); the last row additionally gives the number of all unique mappings. Some columns are explained below, while the remaining columns deal with the precision and recall measures. During manual evaluation we used the 'categories' shown in the table below; these 'categories' are mainly needed for choosing candidates for the 'Consensus building workshop'.
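For reference, precision and recall are assumed here to follow their standard set-based definitions, with A the submitted alignment and R the (manually labelled) reference alignment:

    Precision(A, R) = |A ∩ R| / |A|
    Recall(A, R)    = |A ∩ R| / |R|

Precision thus measures the share of submitted mappings that are correct, and recall the share of reference mappings that were found.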

Results of all participants

There were six participants in the 'conference track' within OAEI-2007. You can download the results of each participant (i.e. the alignments that the participant submitted to the 'conference track').