Ontology Alignment Evaluation Initiative - OAEI-2014 Campaign

MultiFarm Results

This page presents the results of the OAEI 2014 campaign for the MultiFarm track. Details on this data set can be found on the MultiFarm data set page. If you notice any kind of error (wrong numbers, incorrect information on a matching system, etc.), do not hesitate to contact us (see the contact address in the last paragraph of this page).

Experimental setting

For the 2014 campaign, part of the data set has been used for a kind of blind evaluation. This subset includes all pairs of matching tasks involving the edas and ekaw ontologies (resulting in 36x24 matching tasks), which were not used in previous campaigns. We refer to this as the edas and ekaw based evaluation in the following. Participants were able to test their systems on the freely available subset of matching tasks (open evaluation), including reference alignments, which is available via the SEALS repository and is composed of 36x25 tasks.

We distinguish two types of matching tasks: (i) those tasks where two different ontologies have been translated into different languages; and (ii) those tasks where the same ontology has been translated into different languages. For the tasks of type (ii), good results are not directly related to the use of specific techniques for dealing with ontologies in different natural languages, but rather to the ability to exploit the fact that both ontologies have an identical structure.
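To make the task counts above concrete, the short sketch below enumerates both task types from ontology and language identifiers. The specific lists are assumptions reconstructed from the numbers in this section (five open-subset ontologies and nine languages, giving 36x25 open tasks), not taken from the actual evaluation code.

```java
// Illustrative sketch only: ontology and language lists are assumptions.
import java.util.ArrayList;
import java.util.List;

public class MultiFarmTasks {

    static final String[] LANGUAGES = {"cn", "cz", "de", "en", "es", "fr", "nl", "pt", "ru"};
    // Open subset (edas and ekaw are held out for the blind subset).
    static final String[] OPEN_ONTOLOGIES = {"cmt", "conference", "confOf", "iasted", "sigkdd"};

    public static void main(String[] args) {
        List<String> typeI = new ArrayList<>();   // (i) different ontologies
        List<String> typeII = new ArrayList<>();  // (ii) same ontology, different languages
        for (int i = 0; i < LANGUAGES.length; i++) {
            for (int j = i + 1; j < LANGUAGES.length; j++) {   // 36 language pairs
                for (String o1 : OPEN_ONTOLOGIES) {
                    for (String o2 : OPEN_ONTOLOGIES) {        // 25 ontology combinations
                        String task = o1 + "-" + LANGUAGES[i] + " vs " + o2 + "-" + LANGUAGES[j];
                        if (o1.equals(o2)) typeII.add(task); else typeI.add(task);
                    }
                }
            }
        }
        // 36 x 25 = 900 tasks in total: 720 of type (i) and 180 of type (ii).
        System.out.println("type (i): " + typeI.size() + ", type (ii): " + typeII.size());
    }
}
```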

This year, only 3 systems use specific cross-lingual methods: AML, LogMap and XMap. All of them integrate a translation module in their implementations. LogMap uses the Google Translator API and pre-compiles a local dictionary in order to avoid multiple accesses to the Google server within the matching process. AML and XMap use Microsoft Translator, and AML adopts the same strategy as LogMap, computing a local dictionary. The translation step is performed before the matching step itself.
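The sketch below illustrates this pre-compiled dictionary strategy: labels are translated once, before matching, and the matching step only consults the local cache. The Translator interface and class names are hypothetical placeholders, not the actual Google or Microsoft client APIs or the systems' own code.

```java
// Minimal sketch of a pre-compiled translation dictionary (hypothetical names).
import java.util.HashMap;
import java.util.Map;

public class LabelTranslationCache {

    /** Hypothetical stand-in for a remote translation service. */
    public interface Translator {
        String translate(String text, String sourceLang, String targetLang);
    }

    private final Map<String, String> dictionary = new HashMap<>();
    private final Translator remote;

    public LabelTranslationCache(Translator remote) {
        this.remote = remote;
    }

    /** Translation step, run once before matching: fill the local dictionary. */
    public void precompile(Iterable<String> labels, String sourceLang, String targetLang) {
        for (String label : labels) {
            dictionary.computeIfAbsent(label,
                    l -> remote.translate(l, sourceLang, targetLang));
        }
    }

    /** Matching step: only the local dictionary is consulted, no remote calls. */
    public String lookup(String label) {
        return dictionary.getOrDefault(label, label);
    }
}
```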

Evaluation results

Open evaluation

Runtime

For both settings, the systems have been executed on a Debian Linux VM configured with four processors and 20GB of RAM, running on a Dell PowerEdge T610 with 2*Intel Xeon Quad Core 2.26GHz E5607 processors, under Linux ProxMox 2 (Debian). All measurements are based on a single run. Some exceptions were observed for MaasMtch, which could not be executed under the same setting as the other systems. Thus, we do not report execution times for this system.

Overall results

The table below presents the aggregated results for the open subset of MultiFarm, for the test cases of type (i) and (ii). These results have been computed using the Alignment API 4.6; we did not distinguish empty and erroneous alignments (a generic sketch of the precision/recall computation is given after the table). We observe significant differences between the results obtained for each type of matching task, in terms of precision, for all systems, with smaller differences in terms of recall. As one could expect, all systems implementing specific cross-lingual techniques generate the best results for test cases of type (i). A similar behavior has also been observed for the test cases of type (ii), even though the specific strategies could have less impact there, since the other systems could instead exploit the identical structure of the ontologies. For cases of type (i), while LogMap has the best precision (at the expense of recall), AML has similar results in terms of precision and recall and outperforms the other systems in terms of F-measure (for both cases). The reader can refer to the OAEI paper for a more detailed discussion of these results.

MultiFarm aggregated results per matcher (average), for each type of matching task -- different ontologies (i) and same ontologies (ii). Time is measured in minutes (time for completing the 36x25 matching tasks). Size indicates the average of the number of generated correspondences for each test type.
Different ontologies (i) Same ontologies (ii)
System Size Precision F-measure Recall Size Precision F-measure Recall
Specific cross-lingual matchers AML 11.40 .57 .54 .53 54.89 .95 .62 .48
LogMap 5.04 .80 .40 .28 36.07 .94 .41 .27
XMap 110.79 .31 .35 .43 67.75 .76 .50 .40
Non-specific matchers AOT 106.29 .02 .04 .17 109.79 .11 .12 .12
AOTL 1.86 .10 .03 .02 2.65 .27 .02 .01
LogMap-C 1.30 .15 .04 .02 3.52 .31 .02 .01
LogMapLt 1.73 .13 .04 .02 3.65 .25 .02 .01
MaasMtch 3.16 .27 .15 .10 7.71 .52 .10 .06
RSDLWB 1.31 .16 .04 .02 2.41 .34 .02 .01
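As a rough illustration of how such figures can be derived, the sketch below computes precision, recall and F-measure for a single task from sets of correspondences, and aggregates per-task F-measures by arithmetic mean (as in the table above) and harmonic mean (one reading of the H-mean used in the per-language-pair table below). It is illustrative only: it is not the Alignment API 4.6 evaluator used for the official results, and the exact aggregation used there may differ.

```java
// Generic precision/recall/F-measure sketch; correspondences are plain strings here.
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class PRFEvaluation {

    /** Precision, recall and F-measure of a generated alignment against a reference. */
    public static double[] prf(Set<String> generated, Set<String> reference) {
        Set<String> correct = new HashSet<>(generated);
        correct.retainAll(reference);
        double p = generated.isEmpty() ? 0.0 : (double) correct.size() / generated.size();
        double r = reference.isEmpty() ? 0.0 : (double) correct.size() / reference.size();
        double f = (p + r) == 0.0 ? 0.0 : 2 * p * r / (p + r);
        return new double[]{p, r, f};
    }

    /** Arithmetic mean of per-task F-measures. */
    public static double average(List<Double> fMeasures) {
        return fMeasures.stream().mapToDouble(Double::doubleValue).average().orElse(0.0);
    }

    /** Harmonic mean of per-task F-measures (zero if any task scores zero). */
    public static double harmonicMean(List<Double> fMeasures) {
        double sumInv = 0.0;
        for (double f : fMeasures) {
            if (f == 0.0) return 0.0;
            sumInv += 1.0 / f;
        }
        return fMeasures.isEmpty() ? 0.0 : fMeasures.size() / sumInv;
    }
}
```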

Language specific results (type i)

The table below presents the results per pair of languages for the test cases involving different ontologies (type i). As already observed for the best system last year (YAM++), the best results in terms of F-measure for AML have been observed for the pairs involving Czech -- cz-en (.63), cz-ru (.63), cz-es (.61), cz-nl (.60) -- followed by the pair involving English and Russian -- en-ru (.60). LogMap generates its best scores for pairs involving English: en-es (.61) and cz-en (.60), followed by en-pt (.56) and de-en (.56). As with AML, the top F-measure results for XMap are observed for pairs involving Czech -- cz-es (.50), cz-fr (.47), cz-pt (.46). However, when dealing with cases of type (ii), these systems generate their best results for the pairs involving English, French, Portuguese and Spanish (including Dutch for LogMap).

Most of the non-specific systems cannot deal with the Chinese and Russian languages. All of them generate their best results for the pairs es-pt and de-en: AOT (es-pt .10), AOTL (de-en .19), LogMap-C (de-en .20), LogMapLt (es-pt .23), MaasMtch (de-en .37) and RSDLWB (es-pt .23), followed by es-fr, en-es and fr-nl. In the absence of specific strategies, these systems take advantage of the similarities in the vocabulary of these languages. A similar result was observed last year, where 7 out of 10 non-specific systems generated their best results for the pair es-pt, followed by the pair de-en. On the other hand, although it is likely harder to find correspondences for cz-pt than for es-pt, for some systems pairs involving Czech appear among the top-5 F-measures (cz-pt for LogMap-C, LogMapLt and RSDLWB, or cz-es for AOTL, LogMapLt and RSDLWB). This can be explained by the specific way systems combine their internal matching techniques (ontology structure, reasoning, coherence, linguistic similarities, etc.).

AML AOT AOTL LogMap LogMapC LogMapLt MaasMtch RSDLWB XMap
Size F-m. Size F-m. Size F-m. Size F-m. Size F-m. Size F-m. Size F-m. Size F-m. Size F-m.
cn-cz 7.25 .45 106.30 .01 1.30 .00 2.05 .24 1.00 .00 1.40 .00 1.00 .00 1.00 .00 14.05 .29
cn-de 8.35 .49 106.30 .01 1.30 .00 1.35 .17 1.00 .00 1.40 .00 1.05 .00 1.00 .00 14.65 .28
cn-en 9.95 .55 106.30 .00 1.30 .00 1.85 .23 1.00 .00 1.40 .00 1.10 .00 1.00 .00 15.50 .26
cn-es 1.05 .54 106.30 .00 1.30 .00 1.85 .20 1.00 .00 1.40 .00 1.00 .00 1.00 .00 13.60 .30
cn-fr 9.65 .49 106.30 .01 1.30 .00 2.05 .19 1.00 .00 1.40 .00 1.00 .00 1.00 .00 86.10 .01
cn-nl 6.20 .38 106.30 .01 1.30 .00 1.40 .18 1.00 .00 1.40 .00 1.10 .00 1.00 .00 15.25 .25
cn-pt 9.65 .51 106.30 .01 1.30 .00 1.75 .23 1.00 .00 1.40 .00 1.00 .00 1.00 .00 12.80 .25
cn-ru 3.60 .40 106.30 .01 1.30 .00 2.50 .27 1.00 .00 1.40 .00 1.00 .00 1.00 .00 3145.15 .01
cz-de 11.25 .53 106.25 .04 1.45 .02 5.75 .46 1.60 .06 2.15 .09 4.00 .26 1.75 .09 15.90 .41
cz-en 13.15 .63 106.30 .05 2.00 .04 7.90 .60 1.25 .04 1.70 .04 4.65 .28 1.30 .04 18.70 .41
cz-es 13.40 .61 106.30 .04 2.10 .11 7.65 .47 1.50 .07 2.20 .11 3.85 .25 1.80 .11 16.70 .50
cz-fr 13.00 .55 106.30 .04 1.40 .01 5.05 .44 1.00 .00 1.50 .01 2.90 .17 1.10 .01 16.75 .47
cz-nl 12.20 .60 106.30 .05 1.55 .04 7.25 .52 1.15 .02 1.65 .04 4.65 .27 1.25 .04 14.40 .02
cz-pt 13.25 .59 106.30 .04 1.90 .08 6.95 .51 1.60 .09 2.35 .13 3.20 .22 1.95 .13 16.05 .46
cz-ru 1.70 .63 106.30 .01 1.30 .00 5.65 .47 1.00 .00 1.40 .00 1.00 .00 1.00 .00 17.25 .41
de-en 12.85 .56 106.30 .06 4.35 .19 6.75 .56 2.50 .20 2.90 .20 7.15 .37 2.50 .20 17.00 .40
de-es 12.70 .52 106.25 .05 1.40 .00 5.50 .43 1.85 .09 1.80 .06 3.90 .28 1.40 .06 13.45 .43
de-fr 13.35 .51 106.30 .04 1.65 .04 3.75 .34 1.15 .02 1.65 .04 4.05 .24 1.25 .04 14.45 .41
de-nl 1.25 .49 106.30 .06 1.90 .04 5.15 .44 1.25 .03 1.75 .04 5.05 .25 1.35 .04 14.90 .33
de-pt 12.00 .50 106.30 .03 1.70 .04 4.95 .44 1.40 .06 1.90 .07 2.65 .17 1.50 .07 13.45 .39
de-ru 8.55 .48 106.25 .01 1.30 .00 3.40 .32 1.00 .00 1.40 .00 1.05 .00 1.00 .00 16.30 .36
en-es 13.60 .59 106.30 .07 2.70 .04 8.50 .61 1.70 .10 1.70 .04 5.85 .33 1.30 .04 15.75 .46
en-fr 13.75 .54 106.30 .06 4.40 .10 5.60 .50 1.55 .06 1.80 .04 6.65 .33 1.25 .04 17.30 .42
en-nl 12.30 .57 106.25 .07 3.95 .07 7.10 .53 1.65 .05 2.30 .10 6.35 .33 1.60 .07 18.25 .38
en-pt 13.20 .58 106.30 .07 2.60 .05 7.30 .56 1.40 .06 1.85 .06 3.85 .23 1.40 .06 16.45 .44
en-ru 11.20 .60 106.30 .01 1.30 .00 4.65 .37 1.00 .00 1.40 .00 1.30 .00 1.00 .00 21.30 .38
es-fr 14.55 .57 106.30 .09 1.60 .01 5.95 .45 1.40 .06 1.50 .01 4.85 .30 1.10 .01 93.85 .02
es-nl 13.35 .59 106.30 .05 1.40 .00 7.15 .41 1.00 .00 1.40 .00 4.45 .20 1.00 .00 16.40 .45
es-pt 13.45 .57 106.30 .10 3.40 .18 8.20 .51 3.00 .20 3.95 .23 9.35 .36 3.55 .23 18.35 .43
es-ru 12.05 .55 106.30 .00 1.30 .00 6.55 .43 1.00 .00 1.40 .00 1.00 .00 1.00 .00 17.30 .43
fr-nl 12.60 .55 106.30 .06 3.15 .12 4.45 .40 1.90 .11 2.40 .12 4.65 .29 2.00 .13 17.70 .42
fr-pt 14.05 .55 106.30 .06 1.50 .00 5.35 .48 1.00 .00 1.40 .00 3.15 .16 1.00 .00 17.55 .43
fr-ru 11.50 .53 106.30 .00 1.30 .00 3.95 .36 1.00 .00 1.40 .00 1.00 .00 1.00 .00 17.25 .39
nl-pt 12.65 .57 106.30 .04 1.55 .01 6.00 .46 1.10 .01 1.50 .01 3.00 .07 1.10 .01 16.25 .42
nl-ru 9.90 .52 106.30 .01 1.30 .00 4.95 .38 1.00 .00 1.40 .00 1.10 .00 1.00 .00 19.40 .41
pt-ru 11.00 .52 106.30 .00 1.30 .00 5.25 .41 1.00 .00 1.40 .00 1.00 .00 1.00 .00 16.95 .40
MultiFarm results per pair of languages (H-mean for F-measure) and size of generated alignments (average), for the test cases of type (i)

Comparison with previous years

The table below presents a comparison, in terms of F-measure, of the systems that have implemented some cross-lingual strategy in at least one OAEI campaign. For the results marked with an asterisk (*), the corresponding system version did not implement specific strategies in that year. Best F-measures for cases (i) and (ii) over the years are indicated in bold face.

Generated alignments and additional table of results

You can download the complete set of generated alignments. These alignments have been generated by executing the tools with the help of the SEALS infrastructure. All results presented above are based on these alignments. You can also download additional tables of results (including precision and recall for each pair of languages), for both types of matching tasks, (i) and (ii).

Edas and ekaw based evaluation

Overall results

This year we have included edas and ekaw in a (pseudo) blind setting. In fact, this subset was, two years ago, made available by mistake on the MultiFarm web page. Since then, we have removed it from the page, and it is also not available to the participants via the SEALS repositories. However, we cannot guarantee that the participants have not used this data set for their tests.

We evaluated this subset for the systems implementing specific cross-lingual strategies. The tools were run on the SEALS platform using locally stored ontologies. The table below presents the results for AML and LogMap. Using this setting, XMap threw exceptions for most pairs and its results are not reported for this subset. These internal exceptions were due to the fact that the system exceeded the access limit of the translator. While AML includes in its local dictionaries the automatic translations for the two ontologies, this is not the case for LogMap (a real blind case). This can explain the similar results obtained by AML in both settings. LogMap, however, encountered many problems accessing the Google translation server from our server, which explains the decrease in its results and the increase in runtime (besides the fact that this data set is slightly bigger than the open data set in terms of ontology elements). Overall, for cases of type (i) -- notably in the case of AML -- the systems maintained their performance with respect to the open setting.

MultiFarm aggregated results per matcher for the edas and ekaw based evaluation, for each type of matching task -- different ontologies (i) and same ontologies (ii). Time is measured in minutes (time for completing the 36x24 matching tasks).
Different ontologies (i) Same ontologies (ii)
System Time Size Precision F-measure Recall Size Precision F-measure Recall
AML 14 12.82 .55 .47 .42 64.59 .94 .62 .46
LogMap 219 5.21 .77 .33 .22 71.13 .19 .14 .11
XMap - - - - - - - - -

Generated alignments and additional table of results

You can download the complete set of generated alignments for the blind evaluation. These alignments have been generated by executing the tools with the help of the SEALS infrastructure. All results presented above are based on these alignments. You can also download additional tables of results (including precision and recall for each pair of languages), for both types of matching tasks, (i) and (ii).

References

[1] Christian Meilicke, Raul Garcia-Castro, Fred Freitas, Willem Robert van Hage, Elena Montiel-Ponsoda, Ryan Ribeiro de Azevedo, Heiner Stuckenschmidt, Ondrej Svab-Zamazal, Vojtech Svatek, Andrei Tamilin, Cassia Trojahn, Shenghui Wang. MultiFarm: A Benchmark for Multilingual Ontology Matching. Accepted for publication at the Journal of Web Semantics.

An authors' version of the paper can be found at the MultiFarm homepage, where the data set is described in detail.

Contact

This track is organised by Cassia Trojahn dos Santos, with the help of Roger Granada in 2014. If you have any problems working with the ontologies, or any questions or suggestions, feel free to write an email to cassia [.] trojahn [at] irit [.] fr.