There are five alignment systems enrolled in the complex track: AML, AMLC, AROA, CANARD, and Lily. Besides these five systems, we also evaluated other alignment systems. All systems were run on the Hydrography dataset, and the output alignments were evaluated as described below.
In this subtrack, the alignments are automatically evaluated over the Hydrography dataset. To assess the quality of a mapping, we consider two dimensions. First, we evaluate whether a mapping contains the correct entities, i.e., those involved in the reference alignment. Second, we evaluate the relationship that holds between these entities, e.g., equivalence or subsumption. Based on this, we break the evaluation procedure down into three subtasks: entity identification, relationship identification, and full complex identification.
For each entity in the source ontology, the alignment system is asked to list all of the entities in the target ontologies that are related to it in some way.
For example:
owl:equivalentClasses(ont1:A1 owl:intersectionOf(ont2:B1 owl:someValuesFrom(ont2:B2 ont2:B3)))
The goal of this task is to find the entities in ont2 that are most relevant to the class ont1:A1. In this case, the best output would be ont2:B1, ont2:B2, and ont2:B3.
The result is evaluated based on precision, recall, and f-measure.
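As an illustration of how these entity identification scores can be computed, here is a minimal Python sketch (not the official evaluation code; the function name and IRIs are illustrative) that scores the set of target entities found for a single source entity against the reference:

```python
# Minimal sketch (not the official evaluation code): set-based precision,
# recall, and f-measure over the target entities found for one source entity.
def entity_scores(found, reference):
    """`found` and `reference` are sets of target-entity names."""
    if not found or not reference:
        return 0.0, 0.0, 0.0
    correct = len(found & reference)      # entities also present in the reference
    precision = correct / len(found)
    recall = correct / len(reference)
    f_measure = (2 * precision * recall / (precision + recall)
                 if precision + recall else 0.0)
    return precision, recall, f_measure

# For the example above: a system that found only ont2:B1 and ont2:B2.
print(entity_scores({"ont2:B1", "ont2:B2"},
                    {"ont2:B1", "ont2:B2", "ont2:B3"}))
# -> (1.0, 0.666..., 0.8)
```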
For each correspondence, the system should then endeavor to find the concrete relationship, such as equivalence, subsumption, intersection, or value restriction, that holds between the entities. In terms of the example above, an alignment system needs to eventually determine that the relationship between the two sides is equivalence. Table 1 shows the similarity values used in the evaluation for the different situations. We do not penalise an incorrect relationship with a score of zero, because that would completely discard the entity identification output, regardless of whether it is a reasonable result or an entirely incorrect one.
Table 1. Similarity for Relationship Identification
Found Relation | Correct Relation | Similarity | Comment |
---|---|---|---|
= | = | 1.0 | correct relation |
⊂ | ⊂ | 1.0 | correct relation |
⊃ | ⊃ | 1.0 | correct relation |
⊂ | = | 0.8 | return less information, but correct |
= | ⊃ | 0.8 | return less information, but correct |
⊃ | = | 0.6 | return more information, but incorrect |
= | ⊂ | 0.6 | return more information, but incorrect |
⊂ | ⊃ | 0.3 | incorrect relation |
⊃ | ⊂ | 0.3 | incorrect relation |
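Table 1 can be read as a simple lookup from a (found relation, correct relation) pair to a similarity value. A minimal sketch of that lookup, assuming the ASCII symbols "=", "<" (subsumed by), and ">" (subsumes) stand for the relations in the table (names are illustrative, not the official scoring code):

```python
# Table 1 encoded as a lookup (illustrative sketch, not the official scoring code).
# "=" equivalence, "<" subsumed by, ">" subsumes.
RELATION_SIMILARITY = {
    ("=", "="): 1.0, ("<", "<"): 1.0, (">", ">"): 1.0,   # correct relation
    ("<", "="): 0.8, ("=", ">"): 0.8,  # returns less information, but correct
    (">", "="): 0.6, ("=", "<"): 0.6,  # returns more information, but incorrect
    ("<", ">"): 0.3, (">", "<"): 0.3,  # incorrect relation
}

def relation_similarity(found, correct):
    """Similarity of the found relation; keys are (found, correct)."""
    return RELATION_SIMILARITY.get((found, correct), 0.0)

print(relation_similarity("<", "="))  # -> 0.8
```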
This task is a combination of the former two steps. We multiply the entity identification results by the similarity of the relationship to obtain the relaxed precision, recall, and f-measure. Strictly speaking, these scores could also be aggregated with functions other than multiplication [1].
relaxed_precision = entity_precision * similarity of relationship
relaxed_recall = entity_recall * similarity of relationship
relaxed_f-measure = 2 * relaxed_precision * relaxed_recall / (relaxed_precision + relaxed_recall)
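Putting the two steps together, the following short sketch computes the full complex identification scores under the definitions above (multiplicative aggregation; the function name and example numbers are illustrative):

```python
# Sketch of the full complex identification scores under the definitions above
# (multiplicative aggregation; names and example numbers are illustrative).
def relaxed_scores(entity_precision, entity_recall, rel_similarity):
    relaxed_p = entity_precision * rel_similarity
    relaxed_r = entity_recall * rel_similarity
    relaxed_f = (2 * relaxed_p * relaxed_r / (relaxed_p + relaxed_r)
                 if relaxed_p + relaxed_r else 0.0)
    return relaxed_p, relaxed_r, relaxed_f

# Example: entity precision 0.75, entity recall 1.0, and subsumption returned
# where equivalence was expected (similarity 0.8 from Table 1).
print(relaxed_scores(0.75, 1.0, 0.8))  # -> (0.6, 0.8, ~0.686)
```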
The output alignments as well as the detailed results of the systems over the Hydrography dataset are downloadable here.
Table 2. The Performance of All Alignment Systems on the Hydrography Benchmark
Systems | (1:1) | (1:n) | (m:n) | Cree-SWO relaxed precision | Cree-SWO relaxed recall | Cree-SWO relaxed f-measure | Hydro3-SWO relaxed precision | Hydro3-SWO relaxed recall | Hydro3-SWO relaxed f-measure | HydrOntology_native-SWO relaxed precision | HydrOntology_native-SWO relaxed recall | HydrOntology_native-SWO relaxed f-measure | HydrOntology_translated-SWO relaxed precision | HydrOntology_translated-SWO relaxed recall | HydrOntology_translated-SWO relaxed f-measure | Total relaxed precision | Total relaxed recall | Total relaxed f-measure
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
reference alignment | 113 | 69 | 15 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
AGM | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
Alin | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
AML | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
AMLC | 8 | 2 | 0 | 0.60 | 0.02 | 0.03 | 0.56 | 0.17 | 0.26 | - | - | - | 0.28 | 0.03 | 0.06 | 0.45 | 0.05 | 0.10 |
AROA | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
CANARD | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
DOME | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
FCAMap-KG | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
Lily | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
LogMap | 10 | 0 | 0 | - | - | - | 0.87 | 0.16 | 0.27 | - | - | - | 0.53 | 0.04 | 0.08 | 0.67 | 0.05 | 0.10 |
LogMapBio | 11 | 0 | 0 | - | - | - | 0.87 | 0.16 | 0.27 | - | - | - | 0.60 | 0.04 | 0.08 | 0.70 | 0.05 | 0.10 |
LogMapKG | 11 | 0 | 0 | - | - | - | 0.87 | 0.16 | 0.27 | - | - | - | 0.60 | 0.04 | 0.08 | 0.67 | 0.05 | 0.10 |
LogMapLt | 11 | 0 | 0 | 0.60 | 0.03 | 0.06 | 0.92 | 0.14 | 0.24 | - | - | - | 0.53 | 0.04 | 0.08 | 0.66 | 0.06 | 0.10 |
ONTMAT1 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
POMAP++ | 6 | 0 | 0 | - | - | - | 0.92 | 0.14 | 0.24 | - | - | - | 0.43 | 0.02 | 0.04 | 0.65 | 0.04 | 0.07 |
Wiktionary | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
[1] Marc Ehrig and Jérôme Euzenat. "Relaxed precision and recall for ontology matching." In Proceedings of the K-CAP 2005 Workshop on Integrating Ontologies, Banff, Canada, 2005.