The SEALS project is dedicated to the evaluation of semantic web technologies. To that end, it created a platform for easing this evaluation, organizing evaluation campaigns, and building the community of tool providers and tool users around this evaluation activity.
OAEI and SEALS are closely coordinated in the area of ontology matching (the SEALS platform covers other areas as well), and the SEALS platform has been progressively integrated into the OAEI evaluation. Starting in 2010, three tracks (benchmark, anatomy, conference) were supported by SEALS through a web-based evaluation service. In 2011 we went a step further and deployed and executed most of the tools on the SEALS platform. In 2011.5, the evaluation was almost fully executed on the SEALS platform; the following tracks were conducted in this modality: benchmark, anatomy, conference, multifarm and scalability. These same tracks, plus the library track, will be executed this way in 2012.
To ease communication between participants and track organizers, this year we will have an OAEI contact point in the person of Jose-Luis : Aguirre # inria : fr. The role of the contact point is defined below.
Participants have to follow this procedure (some participants of OAEI 2010, 2011 and 2011.5 have already conducted the first two steps):
In OAEI 2011.5, some tracks experienced problems running all the tools under the same JDK version. Most participants continue to use JDK 1.6.xx, but new participants tend to use JDK 1.7. To facilitate the evaluation process, please try to run your tool under JDK 1.7. If this is not possible for you, please keep us informed.
Once these steps have been completed, we run all systems on the SEALS platform and generate the results. Each track organizer will decide whether the results will be presented via the SEALS portal, via result pages (as in previous years), or both.
We have prepared a comprehensive PDF tutorial on wrapping and testing your ontology matching tool. You can download it here:
Several additional materials mentioned in the tutorial are available here.
We detected a few problems with previous versions of the client and now offer the improved version available above. Note that this requires referring to "seals-omt-client-v4-1beta.jar" instead of "seals-omt-client.jar" in all command-line examples given in the tutorial.
We encourage developers to use the Alignment API. For developers using it, the following Ant package is available for packaging and validating wrapped tools:
In the tutorial we show how you can use your wrapped tool to run a full evaluation locally. Thus, you can compute precision and recall on all of the test suites listed on the track web pages at any point in your development process.
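The precision and recall computed in such a local evaluation are the standard set-based measures over correspondences: precision is the fraction of returned correspondences that appear in the reference alignment, and recall is the fraction of the reference alignment that was found. A minimal, self-contained sketch (independent of the SEALS client and the Alignment API, with hypothetical correspondences encoded as plain strings for illustration only) looks like this:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class PRDemo {

    // precision = |found ∩ reference| / |found|
    static double precision(Set<String> found, Set<String> reference) {
        Set<String> correct = new HashSet<>(found);
        correct.retainAll(reference);
        return found.isEmpty() ? 0.0 : (double) correct.size() / found.size();
    }

    // recall = |found ∩ reference| / |reference|
    static double recall(Set<String> found, Set<String> reference) {
        Set<String> correct = new HashSet<>(found);
        correct.retainAll(reference);
        return reference.isEmpty() ? 0.0 : (double) correct.size() / reference.size();
    }

    public static void main(String[] args) {
        // Hypothetical correspondences "entity1=entity2"; a real tool would
        // compare URIs from the produced and reference alignment files.
        Set<String> found = new HashSet<>(Arrays.asList("a=x", "b=y", "c=z"));
        Set<String> ref   = new HashSet<>(Arrays.asList("a=x", "b=y", "d=w"));
        System.out.printf("precision=%.2f recall=%.2f%n",
                precision(found, ref), recall(found, ref));
        // prints: precision=0.67 recall=0.67
    }
}
```

The client and the Alignment API evaluators perform this comparison for you over whole test suites; the sketch above only makes explicit what the reported numbers mean.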
Note also that we have modified seals-omt-client.jar compared to the OAEI 2011 version to allow a more flexible way of running test suites; see Sections 5.2 and 5.3 in the tutorial. This modification is fully backwards-compatible, i.e., the new version of the client works with all of the tools wrapped for OAEI 2011. No changes are required!
A system that plans to participate in one of the SEALS-supported tracks will be evaluated on all tracks supported by SEALS. This means that it is no longer possible to participate in a single track, e.g., just the anatomy track. We know that this can be a problem for systems that have been developed specifically for, e.g., matching biomedical ontologies. However, this point can still be emphasized in the results paper that you have to write about your system. In other words, if the results generated for some specific track are poor, there is a place where this can be explained appropriately.
Do not hesitate to contact jose-luis : aguirre # inria : fr with any questions, whether related to the overall procedure, to problems with tool wrapping, and so on, and do not forget to send us your evaluation request (the earlier, the better)!
While developing and improving the tutorial, we have been in contact with several matching tool developers in order to have some 'reference' matchers for testing the tutorial and the client that comes with it. Thanks go out to Hua Wei Khong Watson (Eff2match), Peigang Xu (Falcon-OA), Faycal Hamdi (Taxomap), Peng Wang (Lily), Zhichun Wang (RiMOM), and Cosmin Stroe (AgreementMaker).