In contrast to previous evaluation campaigns, we have not listed which matching systems participated in which track, because we evaluated each system on each track. In some cases a system could not generate results for a track; however, this can also be counted as a result. Overall, we executed 19 systems; links to the pages describing the results for each track can be found at the bottom of this page.
| Matching system | version for OAEI 2011 | in between | version for OAEI 2011.5 | country |
|---|---|---|---|---|
| AgrMaker | • | | | US |
| Aroma | • | | | France |
| AUTOMSv2 | | | • | Finland |
| CIDER | • | | | Spain |
| CODI | • | • | • | Germany |
| CSA | • | | | Vietnam |
| GOMMA | | | • | Germany |
| Hertuda | | | • | Germany |
| LDOA | • | | * | Tunisia |
| Lily | • | | | China |
| LogMap | • | • | • | UK |
| LogMapLt | | | • | UK |
| MaasMtch | • | | • | Netherlands |
| MapEVO | • | | • | Germany |
| MapPSO | • | | • | Germany |
| MapSSS | • | • | | US |
| Optima | • | | | US |
| WeSeEMtch | | | • | Germany |
| YAM++ | • | • | | France |
| TOTAL = 19 | 14 | 4 | 10 | |
We have not included the OAEI 2011 systems Serimi and Zhishi.links in our evaluation because they are designed only for the instance matching task. Moreover, we have excluded OACAS and OMR because technical problems prevented us from executing them (we already had similar problems in OAEI 2011). Several days after the final submission deadline, a new version of LDOA appeared (marked by * in the table above); it could not be included in the final evaluation. For each tool we executed the latest available version.
An exception was made for the Benchmarks track. In this track, some datasets were used that were not known to the participants (blind tests). In case of execution problems, the tool developers were informed of the problem and were allowed to upload a subsequent bugfix.
The detailed results are available here:
In case of further questions, please contact the organizers of the specific track directly. Contact information can be found at the bottom of each linked results page.