Ontology Alignment Evaluation Initiative - OAEI 2021 Campaign

Details for the crisp reference alignment evaluation for the Conference track within OAEI 2021


Evaluation Modalities

Evaluation Based on Crisp Reference Alignments

We have three variants of crisp reference alignments (the confidence values of all correspondences are 1.0). Each contains 21 alignments (test cases), which corresponds to the complete pairwise alignment space between 7 ontologies from the OntoFarm data set (7 × 6 / 2 = 21 pairs). This is a subset of all 16 ontologies within this track [4]; see the OntoFarm data set web page.

For each reference alignment we provide three evaluation variants:

rar2-M3 is used as the main reference alignment for this year. It will also be used within the synthesis paper.

       ra1       ra2       rar2
M1     ra1-M1    ra2-M1    rar2-M1
M2     ra1-M2    ra2-M2    rar2-M2
M3     ra1-M3    ra2-M3    rar2-M3

Evaluation Setting and Tables Description

Regarding evaluation based on reference alignments, we first filtered out (from the generated alignments) all instance-to-any-entity and owl:Thing-to-any-entity correspondences prior to computing precision/recall/F1-measure/F2-measure/F0.5-measure, because they are not contained in the reference alignments. In order to compute average precision and recall over all those alignments, we used absolute scores (i.e. we computed precision and recall from the absolute counts of TP, FP, and FN across all 21 test cases). This corresponds to micro-averaged precision and recall. Therefore, the resulting numbers can differ slightly from those computed by the SEALS platform (macro-averaged precision and recall). Then we computed the F1-measure in the standard way. Finally, we found the highest average F1-measure with thresholding (where possible).
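The difference between the two averaging schemes can be sketched as follows. This is a minimal illustration, not the official SEALS evaluation code; the per-test-case TP/FP/FN counts are hypothetical.

```python
# Micro vs. macro averaging of precision/recall over test cases.
# Micro: sum absolute TP/FP/FN counts first, then compute P and R once.
# Macro: compute P and R per test case, then average the per-case values.

def micro_scores(cases):
    """cases: list of (tp, fp, fn) tuples, one per test case."""
    tp = sum(c[0] for c in cases)
    fp = sum(c[1] for c in cases)
    fn = sum(c[2] for c in cases)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

def macro_scores(cases):
    """Per-case precision/recall averaged afterwards (SEALS-style)."""
    ps = [c[0] / (c[0] + c[1]) if c[0] + c[1] else 0.0 for c in cases]
    rs = [c[0] / (c[0] + c[2]) if c[0] + c[2] else 0.0 for c in cases]
    return sum(ps) / len(ps), sum(rs) / len(rs)
```

With uneven test cases the two schemes diverge: micro averaging weights each correspondence equally, so large test cases dominate, while macro averaging weights each test case equally.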

In order to provide some context for understanding matcher performance, we included two simple string-based matchers as baselines. StringEquiv (previously called Baseline1) is a string matcher based on string equality applied to the local names of entities, which are lowercased first (this baseline was also used within the Anatomy track in 2012). edna (a string editing distance matcher) was adopted from the Benchmark track; with respect to performance it is very similar to the previously used Baseline2.
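The two baselines can be sketched as below. This is our own illustrative reconstruction (function names included), not the actual OAEI baseline code; it assumes the entity local names have already been extracted.

```python
# StringEquiv: match iff the lowercased local names are equal.
def string_equiv(name1, name2):
    return name1.lower() == name2.lower()

# Levenshtein (string editing) distance, as used by an edna-style matcher.
def edit_distance(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

# Normalized similarity in [0, 1] derived from the edit distance.
def edna_confidence(name1, name2):
    a, b = name1.lower(), name2.lower()
    return 1.0 - edit_distance(a, b) / max(len(a), len(b), 1)
```

An edna-style matcher then accepts a correspondence when this normalized similarity exceeds some threshold, which is why thresholding affects its precision/recall trade-off.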

The tables below show the results of all 10 tools with regard to all combinations of evaluation variants with crisp reference alignments. They contain precision, recall, F1-measure, F2-measure and F0.5-measure computed at the threshold that yields the highest average F1-measure for each matcher. F1-measure is the harmonic mean of precision and recall. F2-measure (beta = 2) weights recall higher than precision, and F0.5-measure (beta = 0.5) weights precision higher than recall.
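All three measures are instances of the general F-beta measure, F_beta = (1 + beta^2) * P * R / (beta^2 * P + R), which can be computed as:

```python
# Generic F-beta measure: beta=1 gives F1 (harmonic mean),
# beta=2 weights recall higher, beta=0.5 weights precision higher.
def f_beta(precision, recall, beta):
    if precision == 0.0 and recall == 0.0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)
```

For example, a matcher with precision 0.8 and recall 0.6 scores higher on F0.5 (0.75) than on F2 (about 0.63), reflecting the respective emphasis on precision and recall.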

ra1-M1

ra1-M2

ra1-M3

ra2-M1

ra2-M2

ra2-M3

rar2-M1

rar2-M2

rar2-M3

Comparison of OAEI 2021 and 2020

The table below summarizes the performance results of tools that participated in the last two years of the OAEI Conference track with regard to the rar2 reference alignment.

Performance results summary, OAEI 2021 and 2020

Based on this evaluation, we can see that four of the matching tools produced unchanged results, one improved very slightly (Wiktionary), and one declined very slightly (LogMapLt).

Difference between 2021 and 2020 results

Results visualization on precision/recall triangular graph based on the rar2-M3 reference alignment

All tools are visualized in terms of their average F1-measure performance in the figure below. Tools are represented as squares or triangles, and baselines as circles. The horizontal line depicts the level of precision/recall, while values of average F1-measure are depicted by the areas bordered by the corresponding lines F1-measure = 0.5, 0.6 and 0.7.

precision/recall triangular graph for conference and F1-measure based on rar2-M3

Discussion for Evaluation Based on Crisp Reference Alignments

With regard to the two baselines, we can group the tools according to their position (above the edna baseline, above the StringEquiv baseline, below the StringEquiv baseline). Regarding tool positions, there are slight differences between ra1-M3 and ra2-M3, and only two changes in rar2-M3 compared to ra2-M3. The first four positions remain unchanged. Comparing ra1-M3 and ra2-M3: 1) from a shared 5th-7th position, Wiktionary improved to 5th position and FineTOM to 6th position, 2) LogMapLt decreased from a shared 8th position to 9th position, below ALOD2Vec, and 3) AMD decreased from 10th position to a shared 10th position. Comparing ra2-M3 and rar2-M3: 1) LogMapLt improved from 9th position back to a shared 8th position, and 2) AMD decreased from a shared 10th position to 11th position. However, all of these systems performed very closely or equally compared to each other, and remained in the same group with regard to the baselines. In ra1-M3, ra2-M3 and rar2-M3, nine matchers are above the edna baseline (ALOD2Vec, AML, ATMatcher, FineTOM, GMap, LogMap, LogMapLt, TOM, Wiktionary), two matchers are above the StringEquiv baseline (AMD, LSMatch), and three matchers are below the StringEquiv baseline (KGMatcher, Lily, OTMapOnto). Since rar2 is not only free of consistency violations (as ra2) but also free of conservativity violations [2, 3], we consider rar2 the main reference alignment for this year. It will also be used within the synthesis paper.

Based on the evaluation variants M1 and M2, five matchers (AMD, ATMatcher, KGMatcher, Lily, LSMatch) do not match properties at all. Naturally, this has a negative effect on these tools' overall performance within the M3 evaluation variant.

Organizers

References

[1] Michelle Cheatham, Pascal Hitzler: Conference v2.0: An Uncertain Version of the OAEI Conference Benchmark. International Semantic Web Conference (2) 2014: 33-48.

[2] Alessandro Solimando, Ernesto Jiménez-Ruiz, Giovanna Guerrini: Detecting and Correcting Conservativity Principle Violations in Ontology-to-Ontology Mappings. International Semantic Web Conference (2) 2014: 1-16.

[3] Alessandro Solimando, Ernesto Jiménez-Ruiz, Giovanna Guerrini: A Multi-strategy Approach for Detecting and Correcting Conservativity Principle Violations in Ontology Alignments. OWL: Experiences and Directions Workshop 2014 (OWLED 2014). 13-24.

[4] Ondřej Zamazal, Vojtěch Svátek. The Ten-Year OntoFarm and its Fertilization within the Onto-Sphere. Web Semantics: Science, Services and Agents on the World Wide Web, 43, 46-53. 2018.

[5] Martin Šatra, Ondřej Zamazal. Towards Matching of Domain Ontologies to Cross-Domain Ontology: Evaluation Perspective. Ontology Matching 2020.