Several Model-Driven Engineering (MDE) matching tools have recently appeared. It is time to compare such tools and to provide guidelines about their use. To this end, we have proposed a model matching track for the OAEI 2011 campaign. Since the OAEI mainly gathers participants from the ontology community, it is the ideal framework to compare MDE matching tools not only with each other but also with ontology matching systems.
This track proposes a dataset automatically derived from a model-based repository. A model-based repository basically contains metamodels, models, and model transformations. Like ontologies, metamodels are representation formalisms; models are their data instances; model transformations bridge the gap between the domains represented by metamodels.
Every test case consists of a pair of Ecore metamodels, a pair of OWL-DL ontologies, and a reference alignment (derived from a model transformation). The reference alignment conforms to the OAEI Alignment format which is expressed in RDF/XML.
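To give an idea of what a reference alignment looks like, here is a minimal sketch in the OAEI Alignment format. The entity URIs are illustrative placeholders, not taken from the actual dataset; each Cell relates one concept of metamodel A to one concept of metamodel B with a relation and a confidence measure:

```xml
<?xml version="1.0" encoding="utf-8"?>
<rdf:RDF xmlns="http://knowledgeweb.semanticweb.org/heterogeneity/alignment"
         xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
  <Alignment>
    <xml>yes</xml>
    <level>0</level>
    <type>**</type>
    <map>
      <Cell>
        <!-- illustrative entity URIs, not actual dataset content -->
        <entity1 rdf:resource="http://example.org/metamodelA#Person"/>
        <entity2 rdf:resource="http://example.org/metamodelB#Human"/>
        <relation>=</relation>
        <measure rdf:datatype="http://www.w3.org/2001/XMLSchema#float">1.0</measure>
      </Cell>
    </map>
  </Alignment>
</rdf:RDF>
```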
The dataset has 34 test cases derived from an open source model-based repository contributed by the M2M community. The largest metamodel referenced by the test cases contains 865 elements (counting classes and properties). The metamodels involved describe either a concrete technology (e.g., XML, OWL, UML, SVG) or a tool (e.g., ATL, CodeClone). A few metamodels describe synthetic problems (e.g., Families).
The reference alignments contain correspondences with confidence equal to 1; that is, each correspondence asserts that a concept of metamodel B is a correct match for a concept of metamodel A. In a reference alignment, one or more concepts of metamodel A may be related to one or more concepts of metamodel B, resulting in different cardinalities: 1:1, m:1, 1:n, and m:n. These correspondences have been automatically extracted from model transformations developed by users. Since these users have knowledge of the aligned metamodels, the correspondences are meaningful. The reference alignments contain 19 correspondences on average.
Download dataset here!
This track has an open evaluation process. In other words, the participants have access to the reference alignments in order to compare the results of their matching algorithms and improve them if necessary. Once the participants are satisfied with the accuracy, they should send the results to the track organizers, who evaluate them. The results have to conform to the OAEI Alignment format; participants are therefore expected to implement bridges between their own alignment formats and the OAEI one. Some guidelines on how to implement these bridges are given in the section Tool.
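Since the reference alignments are available, participants can score their own results before submitting. A minimal sketch of such a self-check, with correspondences modeled as (entity1, entity2) pairs (the URIs below are illustrative, not from the dataset):

```python
def precision_recall(system, reference):
    """Return (precision, recall) of a system alignment vs. a reference alignment."""
    system, reference = set(system), set(reference)
    found = system & reference                          # correct correspondences
    precision = len(found) / len(system) if system else 0.0
    recall = len(found) / len(reference) if reference else 0.0
    return precision, recall

# Illustrative correspondences between two hypothetical metamodels A and B.
reference = {("A#Person", "B#Human"), ("A#name", "B#label"), ("A#age", "B#years")}
system = {("A#Person", "B#Human"), ("A#name", "B#title")}

p, r = precision_recall(system, reference)
print(p, r)  # precision 0.5, recall 1/3
```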
The track organizers will inform participants about the evaluation results. Based on these results, the participants are expected to provide the organizers with a paper to be published in the proceedings of the Ontology Matching workshop. Details about the paper presentation are given here.
The dataset will be available on 30th May, and the results have to be sent to the track organizers by 30th August. The organizers will inform the participants about the evaluation results on 8th September. Participants must send their papers to the track organizers by 4th October.
For participants who need to translate model-based alignments into OAEI alignments expressed in RDF/XML, we propose a two-step solution:
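As a rough illustration of what such a bridge involves, the following sketch serializes tool-specific correspondences into the OAEI Alignment RDF/XML format. The input representation (plain (entity1, entity2, measure) tuples) and the entity URIs are assumptions for the example, not the track's actual intermediate format:

```python
ALIGNMENT_NS = "http://knowledgeweb.semanticweb.org/heterogeneity/alignment"

def to_oaei_rdf(cells):
    """Render (entity1, entity2, measure) tuples as a minimal OAEI Alignment document."""
    lines = [
        '<?xml version="1.0" encoding="utf-8"?>',
        '<rdf:RDF xmlns="%s"' % ALIGNMENT_NS,
        '         xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">',
        '  <Alignment>',
    ]
    for entity1, entity2, measure in cells:
        lines += [
            '    <map><Cell>',
            '      <entity1 rdf:resource="%s"/>' % entity1,
            '      <entity2 rdf:resource="%s"/>' % entity2,
            '      <relation>=</relation>',
            '      <measure>%s</measure>' % measure,
            '    </Cell></map>',
        ]
    lines += ['  </Alignment>', '</rdf:RDF>']
    return "\n".join(lines)

# Illustrative usage with a single placeholder correspondence.
print(to_oaei_rdf([("http://a#Person", "http://b#Human", 1.0)]))
```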
The track organizers would like to thank the AtlanMod research team for the human and technical resources made available to publish this track.
This track is organized by Kelly Garces and Wolfgang Kling. If you have any problems working with the metamodels (or ontologies), or any suggestions related to the model matching track, feel free to write an email to firstname.lastname@example.org and email@example.com