The evaluation was carried out according to two scenarios. For each scenario, this page presents the raw results obtained; for further details, we refer the reader to the final report.
The results sent by the three participants are available in the data section of the track.
Participant (links evaluated) | Precision (%) | Coverage (%)
---|---|---
DSSim | 93.3 | 68.0 |
Lily | 52.9 | 36.8 |
TaxoMap (exactMatch only) | 88.1 | 41.1 |
TaxoMap (non-exactMatch, strength above 0.5) | 20 (±11) | NA
TaxoMap (non-exactMatch, all) | 25.1 (±8.3) | NA
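For reference, precision here is the proportion of the returned links judged correct, and coverage estimates the proportion of a reference set of links that an alignment recovers; the ± figures suggest confidence intervals from assessing a sample of links rather than all of them. Below is a minimal sketch of these two measures in Python, using hypothetical link sets; the exact sampling and judging procedure is described in the final report.

```python
def precision(found_links: set, correct_links: set) -> float:
    """Fraction of the returned links that were judged correct."""
    return len(found_links & correct_links) / len(found_links)

def coverage(found_links: set, reference_links: set) -> float:
    """Fraction of the reference links recovered by the alignment."""
    return len(found_links & reference_links) / len(reference_links)

# Hypothetical toy data: a link is a (GTT concept, Brinkman concept) pair.
reference = {("gtt:1", "br:a"), ("gtt:2", "br:b"), ("gtt:3", "br:c")}
found = {("gtt:1", "br:a"), ("gtt:2", "br:x")}
judged_correct = {("gtt:1", "br:a")}

print(f"precision = {precision(found, judged_correct):.1%}")  # 50.0%
print(f"coverage  = {coverage(found, reference):.1%}")        # 33.3%
```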
These are the results of the automated evaluation for the second scenario, using a gold standard of books indexed against both the GTT and Brinkman thesauri (all figures are percentages):
Participant | Precision (book level) | Recall (book level) | Precision (annotation level) | Recall (annotation level) | Jaccard (annotation level)
---|---|---|---|---|---
DSSim | 56.55 | 31.55 | 48.73 | 22.46 | 19.98 |
Lily | 43.52 | 15.55 | 39.66 | 10.71 | 9.97 |
TaxoMap (exactMatch) | 52.62 | 19.78 | 47.36 | 13.83 | 12.73 |
TaxoMap (exactMatch + broadMatch) | 46.68 | 19.81 | 40.90 | 13.84 | 12.52 |
TaxoMap (exactMatch + broadMatch + narrowMatch) | 45.57 | 20.23 | 39.51 | 14.12 | 12.67 |
TaxoMap (exactMatch + broadMatch + narrowMatch + relatedMatch) | 45.51 | 20.24 | 39.45 | 14.13 | 12.67 |
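At the annotation level, a natural reading of these measures is to compare, for every book in the gold standard, the set of Brinkman concepts obtained by translating the book's GTT annotations through the alignment with the book's gold Brinkman annotations. The sketch below follows that reading, micro-averaging precision and recall over individual annotations and averaging the per-book Jaccard overlap; the data layout and averaging choices here are assumptions, and the official definitions (including the book-level variants) are given in the final report.

```python
def annotation_scores(predicted: dict, gold: dict) -> tuple:
    """Compare predicted vs. gold annotation sets per book.

    predicted, gold: mapping of book id -> set of Brinkman concept ids.
    Returns (precision, recall, jaccard): precision and recall are
    micro-averaged over annotations; jaccard is the mean per-book overlap.
    """
    true_pos = pred_total = gold_total = 0
    jaccards = []
    for book, gold_anns in gold.items():
        pred_anns = predicted.get(book, set())
        inter = pred_anns & gold_anns
        union = pred_anns | gold_anns
        true_pos += len(inter)
        pred_total += len(pred_anns)
        gold_total += len(gold_anns)
        jaccards.append(len(inter) / len(union) if union else 1.0)
    precision = true_pos / pred_total if pred_total else 0.0
    recall = true_pos / gold_total
    return precision, recall, sum(jaccards) / len(jaccards)

# Hypothetical toy data.
gold = {"b1": {"br:a", "br:b"}, "b2": {"br:c"}}
pred = {"b1": {"br:a"}, "b2": {"br:c", "br:d"}}
print(annotation_scores(pred, gold))  # (0.666..., 0.666..., 0.5)
```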
Initial location of this page: http://www.few.vu.nl/~aisaac/oaei2008/results.html.