Here are the official results of the Ontology Alignment Evaluation Initiative (OAEI) 2006 campaign. They are to be presented in Athens (GA, USA) at the ISWC 2006 Ontology matching workshop.
A synthesis paper in the proceedings of this workshop presents the main results of the 2006 campaign. Further data and updates to the results are available here. This page serves as the official record of the evaluation results.
The papers provided by the participants have been collected in the ISWC 2006 Ontology matching workshop proceedings (PDF), which are also published as CEUR Workshop Proceedings volume 225.
Once again we had more participants than in previous years (4 in 2004, 7 in 2005 and 10 in 2006). We also noted an increase in tool compliance and robustness: the tools had fewer problems carrying out the tests, and we had fewer problems evaluating the results.
We have not yet had time to validate the results provided by the participants. Last year, validating these results proved feasible, so we plan to do it again before publishing the final results (at least for those participants who provided their systems).
We summarize the list of participants in the table below. As last year, not all participants provided results for all tests; they usually attempted those that are easier to run (benchmark, directory and conference). The jobs line coincides with the participants who provided an executable version of their systems. The variety of tests and the short time given to provide results have certainly prevented participants from attempting more tests.
| test | falcon | hmatch | dssim | coma | automs | jhuapl | prior | RiMOM | OCM | nih | Σ |
|------------|---|---|---|---|---|---|---|---|---|---|---|
| benchmark | • | • | • | • | • | • | • | • | • | | 9 |
| anatomy | • | • | • | • | • | | | | | | 5 |
| jobs | • | • | • | • | • | • | | | | | 6 |
| directory | • | • | • | • | • | • | • | | | | 7 |
| food | • | • | • | • | • | | | | | | 5 |
| conference | • | • | • | • | • | • | | | | | 6 |
| certified | | | | | | | | | | | |
| confidence | • | • | • | • | • | | | | | | 5 |
| time | • | • | • | • | • | | | | | | 5 |
As last year, the time devoted to performing these tests (three months) and the period allocated for it (the summer) were relatively short and did not really allow participants to analyse their results and improve their algorithms. On the one hand, this prevents algorithms from being overly tuned to the contest; on the other hand, it can be frustrating for the participants. The timeline is very difficult to handle, and we should try to allow more time for participation next time (in particular, this paper was written less than one week after the results were received).
A track-by-track summary of the results is provided below.