Ontology Alignment Evaluation Initiative

2006 Results

Here are the official results of the Ontology Alignment Evaluation 2006. They are to be presented at the ISWC 2006 Ontology matching workshop in Athens (GA, USA).

A paper in the proceedings of this workshop synthesizes the main results of the 2006 campaign. Further data and updates to the results are available here; this page constitutes the official results of the evaluation.

The papers provided by the participants have been collected in the ISWC 2006 Ontology matching workshop proceedings (PDF), which are also published as CEUR Workshop Proceedings volume 225.

General summary

This year again, we had more participants than in previous years (4 in 2004, 7 in 2005 and 10 in 2006). We also noted an increase in tool compliance and robustness: participants had fewer problems carrying out the tests, and we had fewer problems evaluating the results.

We have not yet had time to validate the results provided by the participants. Last year, validating these results proved feasible, so we plan to do it again before the final results are published (at least for those participants who provided their systems).

We summarize the list of participants in the table below. As last year, not all participants provided results for all tests; they usually ran those which are easier to process (benchmark, directory and conference). The jobs line also corresponds to the participants who provided an executable version of their systems. The variety of tests and the short time given to provide results have certainly prevented participants from considering more tests.

Participants: falcon, hmatch, dssim, coma, automs, jhuapl, prior, RiMOM, OCM, nih.

test         Σ (participants)
benchmark    9
anatomy      5
jobs         6
directory    7
food         5
conference   6
certified    0
confidence   –
time         5

Participants and the state of their submissions (Σ gives the number of participants per test). No system has been certified this year due to lack of resources. Confidence is marked when it is given as a non-boolean value. Time indicates that the participant included execution times with their tests.
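
The non-boolean confidence values mentioned in the caption come from the alignments themselves: in the Alignment format used for OAEI submissions, each correspondence (Cell) carries a measure element. As a minimal illustration (not part of the official evaluation tools), the following Python sketch checks whether a submitted alignment contains graded, i.e. non-0/1, confidence values; the namespace URI and the has_graded_confidence helper are assumptions made for the purpose of the example.

    # Minimal sketch: detect graded (non-boolean) confidence values in an
    # alignment file in the Alignment format used for OAEI submissions.
    # The namespace URI below is an assumption (the one commonly used with
    # the Alignment API); adjust it to match the actual submission files.
    import xml.etree.ElementTree as ET

    ALIGN_NS = "{http://knowledgeweb.semanticweb.org/heterogeneity/alignment}"

    def has_graded_confidence(path):
        """Return True if any correspondence has a measure strictly
        between 0 and 1, i.e. a confidence that is not a 0/1 boolean."""
        tree = ET.parse(path)
        for measure in tree.iter(ALIGN_NS + "measure"):
            if measure.text is None:
                continue
            value = float(measure.text)
            if 0.0 < value < 1.0:
                return True
        return False

For instance, calling has_graded_confidence on a (hypothetical) submission file containing similarity scores such as 0.87 would return True, whereas a submission reporting only 0/1 measures would not.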

Like last year, the time devoted to performing these tests (three months) and the period allocated for it (the summer) are relatively short and do not really allow participants to analyse their results and improve their algorithms. On the one hand, this prevents algorithms from being specifically tuned for the contest; on the other hand, it can be frustrating for the participants. The timeline is very difficult to handle, and we should try to allow more time for participating next time (in particular, this paper was written less than one week after the results were received).

A track-by-track summary of the results is provided below.

Detailed evaluation results


http://oaei.ontologymatching.org/2006/results

$Id: index.html,v 1.10 2007/05/21 04:33:00 euzenat Exp $