Results of the OAEI 2007 Library Thesaurus Mapping Task

The evaluation was carried out according to three scenarios. For each scenario, this page presents the raw results obtained. For further details, we refer the reader to this report (a longer, more technical version of the Library section of the OAEI-2007 report) and to the slides presented at the Ontology Matching workshop.

Results for the thesaurus merging scenario

Participant (links evaluated)      Precision (%)   Coverage (%)
Falcon (exactMatch)                97.25           87.0
Silas (exactMatch)                 78.6            66.1
DSSim (exactMatch)                 13.4            31.0
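
To make the two measures concrete, the sketch below (a hypothetical helper, not the evaluation code used by the organisers) computes precision and coverage of a set of exactMatch links against a set of reference links. In the actual evaluation no complete reference alignment was available; precision and coverage were estimated from expert judgements, as detailed in the report mentioned above.

```python
def precision_and_coverage(returned_links, reference_links):
    """Illustrative computation only; not the official OAEI evaluation script.

    returned_links:  iterable of (gtt_concept, brinkman_concept) pairs produced by a tool
    reference_links: iterable of pairs taken as correct (in the real evaluation this set
                     was estimated via sampling and manual judgement)
    """
    returned = set(returned_links)
    reference = set(reference_links)
    true_positives = returned & reference
    precision = len(true_positives) / len(returned) if returned else 0.0
    coverage = len(true_positives) / len(reference) if reference else 0.0
    return precision, coverage


# Toy usage with made-up concept identifiers.
alignment = {("gtt:dogs", "bk:dogs"), ("gtt:cats", "bk:cats"), ("gtt:birds", "bk:fish")}
reference = {("gtt:dogs", "bk:dogs"), ("gtt:cats", "bk:cats"), ("gtt:horses", "bk:horses")}
p, c = precision_and_coverage(alignment, reference)
print(f"precision={p:.2f} coverage={c:.2f}")  # precision=0.67 coverage=0.67
```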

Results for the annotation translation (book re-indexing) scenario

Results for the automated evaluation, using a gold standard of books indexed against both the GTT and Brinkman thesauri (all figures are percentages):
Participant                       Prec. (book)   Rec. (book)   Prec. (annot.)   Rec. (annot.)   Jaccard (annot.)
Falcon (exactMatch)               65.32          49.21         52.63            36.69           30.76
Silas (exactMatch)                66.05          47.48         53.00            35.12           29.22
DSSim (exactMatch)                18.59          14.34         13.41            9.43            7.54
Silas (exactMatch+relatedMatch)   69.23          59.48         34.20            46.11           24.24
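
As an illustration of how the annotation-level measures can be obtained, the sketch below compares, for each book in the gold standard, the Brinkman concepts produced by translating its GTT annotations through an alignment with the Brinkman concepts actually assigned to it. The helper is hypothetical and only approximates the measure definitions used for this scenario (pooled precision and recall over annotations, and a per-book Jaccard overlap averaged over the gold-standard books); the exact definitions are given in the report.

```python
def annotation_level_scores(translated, gold):
    """Illustrative computation only; hypothetical helper, not the official evaluation code.

    translated: dict mapping book id -> set of Brinkman concepts obtained by translating
                the book's GTT annotations through the alignment
    gold:       dict mapping book id -> set of Brinkman concepts actually assigned to the
                book in the dually indexed gold standard
    """
    tp = fp = fn = 0
    jaccard_sum = 0.0
    for book, gold_set in gold.items():
        trans_set = translated.get(book, set())
        overlap = trans_set & gold_set
        union = trans_set | gold_set
        tp += len(overlap)
        fp += len(trans_set - gold_set)
        fn += len(gold_set - trans_set)
        jaccard_sum += len(overlap) / len(union) if union else 1.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    jaccard = jaccard_sum / len(gold) if gold else 0.0
    return precision, recall, jaccard


# Toy usage with made-up book ids and Brinkman concepts.
translated = {"book1": {"bk:dogs", "bk:pets"}, "book2": {"bk:history"}}
gold = {"book1": {"bk:dogs"}, "book2": {"bk:history", "bk:netherlands"}}
print(annotation_level_scores(translated, gold))  # (0.666..., 0.666..., 0.5)
```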

Results for the manual evaluation, performed by KB experts on a subset of 96 books (all figures are percentages):
Participant            Prec. (annot.)   Rec. (annot.)   Jaccard (annot.)
Falcon (exactMatch)    74.95            46.40           42.16
Silas (exactMatch)     70.35            39.85           35.46
DSSim (exactMatch)     21.04            12.31           10.10