Ontology Alignment Evaluation Initiative - OAEI-2015 Campaign

SEALS platform evaluation modalities

The SEALS project was dedicated to the evaluation of semantic web technologies. To this end, it created a platform for easing this evaluation, organized evaluation campaigns, and built a community of tool providers and tool users around this evaluation activity. The SEALS platform has been progressively integrated into the OAEI evaluation and is now routinely used in ontology matching evaluation.

To ease communication between participants and track organizers, this year we will have an OAEI contact point in the person of Ernesto Jimenez-Ruiz. The role of the contact point is defined below.


Participants have to follow this procedure (some participants have already conducted the first two steps):

  1. Create a user account in the SEALS platform and register your matching tool. Go to the SEALS portal (http://www.seals-project.eu/join-the-community) and create a user account. Once logged in, you can register a tool in the platform (click on the link 'Register your tool'). This requires a short description of your tool and categorizing it via the menu entry 'Ontology Mapping Tool'. It does not require uploading the tool itself. For technical reasons, the seals-project.eu portal is not yet available.
  2. Wrap the current version of your matching tool. This step is described in detail in a tutorial (see the section below). It requires creating a small Java class that acts as a bridge. Furthermore, you have to place the libraries used by your tool in a specific folder structure.
  3. Upload a wrapped version of your matching tool and ask for a dry run. Note that an uploaded tool is visible via the web pages only to the organizers of the evaluation campaign and to the owner of the tool itself. For technical reasons, the seals-project.eu portal is not yet available. If you are interested in a dry run, please go to the next point and send the details of your tool to the OAEI 2015 contact person.
  4. Inform the contact point that you intend to participate in OAEI 2015. Send an email to the contact point (Ernesto Jimenez-Ruiz) with the following information:
  5. Test your tool with the data sets available for each track. The IDs of the data sets for testing your tool prior to the evaluation process are given on each track web page. Participants can test their tools with the SEALS client on those data sets until August 31st. For each track test, please inform the concerned track organizer about the status of your test. This will facilitate the final evaluation process. If you wish, you can send a copy of your e-mail to the OAEI contact point.
  6. Prepare, wrap and upload the final version of your matching tool (deadline: August 31). Many system developers work hard until the final deadline to improve their system. Please wrap the final version of your system as you did for a previous test version in Step 3. For the final evaluation we will use the final version uploaded to the SEALS portal prior to the deadline. Please add a remark in the description of the version indicating that you want to participate with this version in OAEI 2015, and inform the OAEI 2015 contact person about it.
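As a rough illustration of the wrapping in step 2, the bridge is essentially a thin Java class that delegates to your matcher and returns the URL of the produced alignment file. The interface and class names below are simplified stand-ins for illustration only; the actual interface, its package, and the required folder structure are defined by the SEALS client and documented in the tutorial.

```java
import java.io.File;
import java.io.PrintWriter;
import java.net.URL;

// Simplified stand-in for the bridge interface expected by the SEALS client;
// the real interface and package name are given in the tool-wrapping tutorial.
interface MatchingToolBridge {
    URL align(URL source, URL target) throws Exception;
}

// Minimal wrapper: runs a (hypothetical) matcher on the two input ontologies
// and writes the resulting alignment to a temporary file, returning its URL.
public class MyToolBridge implements MatchingToolBridge {
    @Override
    public URL align(URL source, URL target) throws Exception {
        String alignment = runMatcher(source, target);
        File out = File.createTempFile("alignment", ".rdf");
        try (PrintWriter pw = new PrintWriter(out, "UTF-8")) {
            pw.print(alignment);
        }
        return out.toURI().toURL();
    }

    // Placeholder for your actual matching logic; here it just emits an
    // empty RDF document standing in for an Alignment API result.
    private String runMatcher(URL source, URL target) {
        return "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n"
             + "<rdf:RDF xmlns:rdf=\"http://www.w3.org/1999/02/22-rdf-syntax-ns#\"/>\n";
    }
}
```

The important contract is that the bridge takes ontology URLs in and hands an alignment URL back; everything your tool needs beyond that (models, indexes, configuration) stays behind this class.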

Previously, some tracks experienced problems running all the tools under the same JDK version. Most participants continue to use JDK 1.6.xx, but new participants tend to use JDK 1.7. To facilitate the evaluation process, please try to run your tool under JDK 1.7. If this is not possible for you, please keep us informed.

Once these steps have been completed, we run all systems on the SEALS platform and generate the results. Each track organizer will decide whether the results will be presented via the SEALS portal, via result pages (as in previous years), or both.

Tutorial on Tool Wrapping

We have prepared a comprehensive PDF tutorial on wrapping an ontology matching tool and testing it. You can download it here:

Several additional materials mentioned in the tutorial are available here.

In the past, we detected problems caused by libraries that conflict with the libraries used by the SEALS client. When possible, we encourage you to use the same versions as the SEALS client. Please check the Maven dependencies of the SEALS client here.

We also encourage developers to use the Alignment API. For developers using it, the following Ant package is available for packaging and validating wrapped tools:

The tutorial also shows how you can use your wrapped tool to run a full evaluation locally. Thus, you can compute precision and recall for all of the test suites listed on the track web pages at any time during your development process.
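As a reminder of what the local evaluation computes: precision is the fraction of returned correspondences that appear in the reference alignment, and recall is the fraction of reference correspondences that your tool returns. A minimal self-contained sketch (correspondences encoded as plain strings for brevity; the real evaluation works on Alignment API alignments):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Locale;
import java.util.Set;

public class PrecisionRecall {
    // Precision = |found ∩ reference| / |found|
    static double precision(Set<String> found, Set<String> reference) {
        if (found.isEmpty()) return 0.0;
        Set<String> correct = new HashSet<>(found);
        correct.retainAll(reference);
        return (double) correct.size() / found.size();
    }

    // Recall = |found ∩ reference| / |reference|
    static double recall(Set<String> found, Set<String> reference) {
        if (reference.isEmpty()) return 0.0;
        Set<String> correct = new HashSet<>(found);
        correct.retainAll(reference);
        return (double) correct.size() / reference.size();
    }

    public static void main(String[] args) {
        // "uri1 = uri2" strings stand in for equivalence correspondences.
        Set<String> found = new HashSet<>(Arrays.asList(
            "cmt#Paper = ekaw#Paper", "cmt#Person = ekaw#Person", "cmt#Review = ekaw#Event"));
        Set<String> reference = new HashSet<>(Arrays.asList(
            "cmt#Paper = ekaw#Paper", "cmt#Person = ekaw#Person", "cmt#Author = ekaw#Author"));
        // 2 of the 3 found correspondences are correct, and they cover
        // 2 of the 3 reference correspondences.
        System.out.printf(Locale.ROOT, "precision=%.2f recall=%.2f%n",
            precision(found, reference), recall(found, reference));
        // prints: precision=0.67 recall=0.67
    }
}
```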

Additional methods for the interactive track

The SEALS client for the interactive track works exactly the same way as for the other tracks. It only includes one additional class, "Oracle", in the "eu.sealsproject.omt.client.interactive" package. This class provides the method "check", which takes as input the URIs of two concepts and a relation (uri1, uri2, relation). The relation can be one of "=" (equivalence), "<" (subsumed-by), or ">" (subsumes). The method returns true if the correspondence between these entities holds, and false otherwise.

An example call: Oracle.check("http://cmt#Paper", "http://ekaw#Paper", "=")

For testing purposes you should include the SEALS client as a library. However, you must not include the SEALS client jar file within your wrapped system. When starting the client itself, the class should load without problems.

Not all tracks are interactive. To check whether a track is interactive, you can call the method Oracle.isInteractive(), which returns true if the track is interactive and false otherwise.
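For local unit tests without the SEALS client on the classpath, a hypothetical stand-in with the same method signatures can be useful. The class below is purely illustrative: it mirrors only the two methods described above and answers from a fixed reference set instead of the evaluation platform.

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical stand-in for eu.sealsproject.omt.client.interactive.Oracle,
// for local testing only; the real class is provided by the SEALS client.
public class MockOracle {
    // A fixed reference set of correspondences, encoded as "uri1 uri2 relation".
    private static final Set<String> REFERENCE = new HashSet<>();
    static {
        REFERENCE.add("http://cmt#Paper http://ekaw#Paper =");
    }

    // Pretend we are running in an interactive track.
    public static boolean isInteractive() {
        return true;
    }

    // Returns true if the queried correspondence is in the reference set.
    public static boolean check(String uri1, String uri2, String relation) {
        return REFERENCE.contains(uri1 + " " + uri2 + " " + relation);
    }

    public static void main(String[] args) {
        System.out.println(check("http://cmt#Paper", "http://ekaw#Paper", "="));  // true
        System.out.println(check("http://cmt#Paper", "http://ekaw#Review", "<")); // false
    }
}
```

Swapping this mock for the real Oracle during development lets you exercise your tool's interactive logic before uploading it.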

Track specific participation

A system that plans to participate in one of the SEALS-supported tracks will be evaluated on all tracks supported by SEALS. This means that it is no longer possible to participate in a single track only, e.g., just the anatomy track. We know that this can be a problem for systems that have been developed specifically for, e.g., matching biomedical ontologies. However, this point can still be emphasized in the results paper that you have to write about your system. In other words, if the results generated for some specific track are not good, there is a place where this can be explained appropriately.

Please note that a matcher may want to behave differently depending on the ontologies it is provided with; however, this should not be based on features specific to the tracks (e.g., a specific string in the URL, or a specific class name) but on features of the ontologies (e.g., there are no instances, or labels are in German).
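A toy illustration of the permitted kind of adaptation: instead of matching on a track-specific URL or class name, a tool might inspect the ontology's labels. The stopword heuristic below is entirely an assumption for illustration (word list, threshold, and method name are all made up), not part of any OAEI requirement.

```java
import java.util.Arrays;
import java.util.List;

public class LanguageHeuristic {
    // Tiny heuristic: guess that labels are German if a couple of common
    // German function words appear among them. Purely illustrative.
    private static final List<String> GERMAN_HINTS =
        Arrays.asList("und", "der", "die", "das", "von", "mit");

    static boolean looksGerman(List<String> labels) {
        long hits = labels.stream()
            .flatMap(l -> Arrays.stream(l.toLowerCase().split("\\s+")))
            .filter(GERMAN_HINTS::contains)
            .count();
        return hits >= 2; // arbitrary threshold for this sketch
    }

    public static void main(String[] args) {
        System.out.println(looksGerman(Arrays.asList(
            "Konferenz und Workshop", "Autor der Arbeit"))); // true
        System.out.println(looksGerman(Arrays.asList(
            "Conference paper", "Author of the work")));     // false
    }
}
```

The point is that the decision is driven by the content of the input ontologies, so the same behavior would trigger on any data set with German labels, not just one particular track.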


Do not hesitate to contact Ernesto Jimenez-Ruiz (ernesto [.] jimenez [.] ruiz [at] gmail [.] com) with any questions, whether related to the overall procedure, problems with tool wrapping, or anything else. And do not forget to send us your evaluation request (the earlier, the better)!


While developing and improving the tutorial, we have been in contact with several matching tool developers in order to have some 'reference' matchers for testing our tutorial and the client that comes with it. Thanks go out to Hua Wei Khong Watson (Eff2match), Peigang Xu (Falcon-OA), Faycal Hamdi (Taxomap), Peng Wang (Lily), Zhichun Wang (RiMOM), and Cosmin Stroe (AgreementMaker).