Ontology Alignment Evaluation Initiative - SEALS platform evaluation modalities for OAEI 2016 Campaign

SEALS evaluation for OAEI 2016

The SEALS project was dedicated to the evaluation of semantic web technologies. To that end, it created infrastructure for easing this evaluation, organized evaluation campaigns, and built a community of tool providers and tool users around this evaluation activity. The SEALS infrastructure is now routinely used in ontology matching evaluation.

To ease communication between participants and track organizers, we will have two OAEI contact points: Ernesto Jimenez-Ruiz (ernesto [.] jimenez [.] ruiz [at] gmail [.] com) and Daniel Faria (daniel [.] faria [.] 81 [at] gmail [.] com). General questions about the OAEI campaign (deadlines, registration, tool wrapping, submissions, rules, datasets, etc.) should be addressed to Ernesto. Technical questions about the use of the SEALS client and potential library incompatibilities should be addressed to Daniel.

Process

Participants have to follow this procedure:

  1. Register your matching tool. Participants must register their intention to participate in OAEI 2016 by filling in this online form.
  2. Save the instructions for uploading your tool. After you register a system, the OAEI contact person will send you details on how to submit it.
  3. Wrap the current version of your matching tool. This step is described in detail in the tutorial (see the section below). It requires creating a small Java class that acts as a bridge (a sketch is given after this list). Furthermore, you have to place the libraries used by your tool in a specific folder structure.
  4. Test your tool with the data sets available for each track. IDs of the data sets for testing your tool prior to the evaluation process are given on each track web page. Participants can test their tools with the SEALS client on those data sets. For each track test, please inform the relevant track organizer about the status of your test; this will facilitate the final evaluation process. If you wish, you can send a copy of your e-mail to the OAEI contact point.
  5. Upload a wrapped version of your matching tool and ask for a dry run. Participants should test their wrapped tool locally well before the deadline. We can also provide support and perform a dry run on our machines.
  6. Prepare, wrap and upload the final version of your matching tool. Many developers keep improving their system until the final deadline. Please wrap the final version of your system as you did for the test version in Step 3. For the final evaluation, we will use the latest version uploaded prior to the deadline.
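
To illustrate Step 3, the sketch below shows the general shape of such a bridge class. The interface and helper class names (IOntologyMatchingToolBridge, AbstractPlugin, ToolBridgeException, ToolType) follow the SEALS client libraries as presented in the tutorial, but please double-check them there; MyMatcher is a hypothetical placeholder for your own system's entry point.

    import java.io.File;
    import java.net.URL;

    import eu.sealsproject.platform.res.domain.omt.IOntologyMatchingToolBridge;
    import eu.sealsproject.platform.res.tool.api.ToolBridgeException;
    import eu.sealsproject.platform.res.tool.api.ToolType;
    import eu.sealsproject.platform.res.tool.impl.AbstractPlugin;

    // Minimal bridge: the SEALS client hands over the two ontologies as URLs
    // and expects back a URL pointing to a file containing the resulting
    // alignment in the Alignment API RDF format.
    public class MyMatcherBridge extends AbstractPlugin implements IOntologyMatchingToolBridge {

        public URL align(URL source, URL target) throws ToolBridgeException {
            try {
                File result = File.createTempFile("alignment", ".rdf");
                new MyMatcher().match(source, target, result); // hypothetical: your system's entry point
                return result.toURI().toURL();
            } catch (Exception e) {
                throw new ToolBridgeException("cannot generate alignment", e);
            }
        }

        // Variant receiving an input alignment; if your tool does not use it,
        // delegating to the two-argument version is a common choice.
        public URL align(URL source, URL target, URL inputAlignment) throws ToolBridgeException {
            return align(source, target);
        }

        public boolean canExecute() {
            return true; // no special runtime requirements to check
        }

        public ToolType getType() {
            return ToolType.AllInOne;
        }
    }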

In the past, some tracks experienced problems running all the tools under the same JDK version. Some participants still use JDK 1.6.x, while newer participants tend to use JDK 1.7 or even 1.8. To facilitate the evaluation process, please try to run your tool under JDK 1.7. If that is not possible for you, please keep us informed.

Once these steps have been completed, we run all systems on the SEALS infrastructure and generate the results.

Tutorial on Tool Wrapping

We have prepared a comprehensive PDF tutorial on wrapping and testing an ontology matching tool. You can download it here:

Several additional materials mentioned in the tutorial are available here.

We have also prepared a tutorial on how to communicate with the Oracle in the interactive track:

In the past, we detected problems caused by libraries that conflict with those used by the SEALS client. When possible, we encourage you to use the same library versions as the SEALS client; please check its Maven dependencies here. Note that you must not include the SEALS client jar file within your wrapped system.
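
For example, a Maven-based tool can pin a shared library to the client's version and compile against the client without bundling it, roughly as follows (the coordinates and version numbers below are hypothetical; take the real ones from the dependency list linked above):

    <!-- Pin a shared library (log4j, as an example) to the version
         listed for the SEALS client. -->
    <dependency>
      <groupId>log4j</groupId>
      <artifactId>log4j</artifactId>
      <version>1.2.17</version> <!-- hypothetical; use the client's version -->
    </dependency>

    <!-- Compile against the SEALS client interfaces without bundling them:
         "provided" scope keeps the jar out of your packaged tool.
         Coordinates are illustrative only. -->
    <dependency>
      <groupId>eu.sealsproject.platform.res</groupId>
      <artifactId>seals-omt-client</artifactId>
      <version>7.0.5</version>
      <scope>provided</scope>
    </dependency>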

We also encourage developers to use the Alignment API. For developers using it, the following Ant package is available for packaging and validating wrapped tools:
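
As a minimal sketch of producing output with the Alignment API (the ontology and entity URIs, the confidence value, and the output file name are made up for illustration):

    import java.io.PrintWriter;
    import java.net.URI;

    import org.semanticweb.owl.align.AlignmentVisitor;
    import fr.inrialpes.exmo.align.impl.URIAlignment;
    import fr.inrialpes.exmo.align.impl.renderer.RDFRendererVisitor;

    public class AlignmentExample {
        public static void main(String[] args) throws Exception {
            // Create an alignment between two (made-up) ontologies.
            URIAlignment alignment = new URIAlignment();
            alignment.init(new URI("http://example.org/onto1.owl"),
                           new URI("http://example.org/onto2.owl"));
            // Add one equivalence correspondence with confidence 0.9.
            alignment.addAlignCell(new URI("http://example.org/onto1.owl#Person"),
                                   new URI("http://example.org/onto2.owl#Human"),
                                   "=", 0.9);
            // Serialize in the Alignment API RDF format.
            PrintWriter writer = new PrintWriter("alignment.rdf");
            AlignmentVisitor renderer = new RDFRendererVisitor(writer);
            alignment.render(renderer);
            writer.close();
        }
    }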

The tutorial also shows how to use your wrapped tool to run a full evaluation locally. Thus, you can compute precision and recall for all the test suites listed on the track web pages at any point in your development process.
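
Under the hood, such an evaluation amounts to comparing your output with a reference alignment. A minimal sketch using the Alignment API's PRecEvaluator (both file names are made up) looks like this:

    import java.io.File;
    import java.util.Properties;

    import org.semanticweb.owl.align.Alignment;
    import fr.inrialpes.exmo.align.impl.eval.PRecEvaluator;
    import fr.inrialpes.exmo.align.parser.AlignmentParser;

    public class LocalEvaluation {
        public static void main(String[] args) throws Exception {
            AlignmentParser parser = new AlignmentParser(0);
            // Reference alignment from the test suite and your tool's output
            // (both file names are hypothetical).
            Alignment reference = parser.parse(new File("refalign.rdf").toURI());
            parser.initAlignment(null); // reset the parser before reuse
            Alignment produced = parser.parse(new File("alignment.rdf").toURI());
            // Compare the two and print standard evaluation measures.
            PRecEvaluator evaluator = new PRecEvaluator(reference, produced);
            evaluator.eval(new Properties());
            System.out.println("Precision: " + evaluator.getPrecision());
            System.out.println("Recall:    " + evaluator.getRecall());
            System.out.println("F-measure: " + evaluator.getFmeasure());
        }
    }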

Contacts

Do not hesitate to contact Ernesto Jimenez-Ruiz (ernesto [.] jimenez [.] ruiz [at] gmail [.] com) or Daniel Faria (daniel [.] faria [.] 81 [at] gmail [.] com) with any questions you may have.

Acknowledgement

While developing and improving the tutorial, we have been in contact with several matching tool developers to obtain 'reference' matchers for testing our tutorial and the client that comes with it. Thanks go to Hua Wei Khong Watson (Eff2match), Peigang Xu (Falcon-OA), Faycal Hamdi (Taxomap), Peng Wang (Lily), Zhichun Wang (RiMOM), and Cosmin Stroe (AgreementMaker).