This year, 12 participants (ALIN, ALIOn (A-LIOn), AMD, ATMatcher, GraphMatcher, KGMatcher+, LogMap, LogMapLt, LSMatch, Matcha, SEBMatcher, and TOMATO) managed to generate meaningful output. These are the matchers that were submitted and are able to run on the Conference track. ALIN, ALIOn, GraphMatcher, Matcha, SEBMatcher, and TOMATO are the six new matchers participating this year. GraphMatcher and TOMATO were submitted after the deadline. LSMatch participated with an older version (without property and instance matching) compared to its participation in the other OAEI 2022 tracks. SEBMatcher was run directly from its endpoint due to problems running it via MELT.
You can download a subset of all alignments for which there is a reference alignment. In this case we provide the alignments as generated by the MELT platform (afterwards we applied some small modifications, which are explained below). Alignments are stored as follows: SYSTEM-ontology1-ontology2.rdf.
Tools have been evaluated based on crisp reference alignments, uncertain reference alignments, logical reasoning, and an experimental cross-domain DBpedia test case.
We have three variants of crisp reference alignments (the confidence values for all matches are 1.0). They contain 21 alignments (test cases), which corresponds to the complete alignment space between 7 ontologies (7·6/2 = 21 pairs) from the OntoFarm data set. This is a subset of all 16 ontologies within this track [4]; see the OntoFarm data set web page.
Here, we only publish the results based on the main (blind) reference alignment (rar2-M3). This will also be used within the synthesis paper.
Matcher | Threshold | Precision | F0.5-measure | F1-measure | F2-measure | Recall |
---|---|---|---|---|---|---|
SEBMatcher | 0.0 | 0.79 | 0.7 | 0.6 | 0.52 | 0.48 |
StringEquiv | 0.0 | 0.76 | 0.65 | 0.53 | 0.45 | 0.41 |
Matcha | 0.0 | 0.37 | 0.2 | 0.12 | 0.08 | 0.07 |
LogMapLt | 0.0 | 0.68 | 0.62 | 0.56 | 0.5 | 0.47 |
LSMatch | 0.0 | 0.83 | 0.69 | 0.55 | 0.46 | 0.41 |
ATMatcher | 0.0 | 0.69 | 0.64 | 0.59 | 0.54 | 0.51 |
ALIOn | 0.0 | 0.66 | 0.44 | 0.3 | 0.22 | 0.19 |
AMD | 0.0 | 0.82 | 0.68 | 0.55 | 0.46 | 0.41 |
ALIN | 0.0 | 0.82 | 0.7 | 0.57 | 0.48 | 0.44 |
LogMap | 0.0 | 0.76 | 0.71 | 0.64 | 0.59 | 0.56 |
KGMatcher+ | 0.0 | 0.83 | 0.67 | 0.52 | 0.43 | 0.38 |
edna | 0.0 | 0.74 | 0.66 | 0.56 | 0.49 | 0.45 |
GraphMatcher | 0.0 | 0.75 | 0.7 | 0.63 | 0.58 | 0.55 |
TOMATO | 0.0 | 0.09 | 0.11 | 0.16 | 0.28 | 0.6 |
For the crisp reference alignment evaluation you can see more details: for each reference alignment we provide three evaluation variants (M1, M2, and M3).
Regarding evaluation based on reference alignments, we first filtered out (from the alignments generated using the MELT platform) all instance-to-any_entity and owl:Thing-to-any_entity correspondences prior to computing precision/recall/F1-measure/F2-measure/F0.5-measure, because they are not contained in the reference alignment. In order to compute average precision and recall over all those alignments, we used absolute scores (i.e. we computed precision and recall from the absolute counts of TP, FP, and FN across all 21 test cases). This corresponds to micro-averaged precision and recall. Therefore, the resulting numbers can slightly differ from those computed by the MELT platform as macro-averaged precision and recall. Then, we computed the F1-measure in the standard way. Finally, we found the highest average F1-measure with thresholding (where possible).
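As an illustration, the sketch below computes micro-averaged precision, recall, and F1-measure over the test cases and sweeps thresholds to find the highest F1-measure. The data structures (a system alignment as a mapping from correspondences to confidences, a reference alignment as a set of correspondences) and the function names are illustrative only and are not part of the MELT platform.

```python
def micro_scores(test_cases, threshold=0.0):
    """Micro-averaged precision/recall/F1 over all test cases.

    test_cases: list of (system_alignment, reference_alignment) pairs, where
    system_alignment maps a correspondence to its confidence and
    reference_alignment is a set of correspondences (illustrative structures).
    """
    tp = fp = fn = 0
    for system, reference in test_cases:
        kept = {c for c, conf in system.items() if conf >= threshold}
        tp += len(kept & reference)   # absolute counts accumulated across test cases
        fp += len(kept - reference)
        fn += len(reference - kept)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1


def best_threshold(test_cases):
    """Sweep the confidence values occurring in the system alignments and
    return (threshold, precision, recall, f1) for the highest F1-measure."""
    candidates = {0.0} | {conf for system, _ in test_cases
                          for conf in system.values()}
    return max(((t,) + micro_scores(test_cases, t) for t in sorted(candidates)),
               key=lambda row: row[3])
```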
In order to provide some context for understanding matcher performance, we included two simple string-based matchers as baselines. StringEquiv (previously called Baseline1) is a string matcher based on string equality applied to lowercased local names of entities (this baseline was also used within the Anatomy track in 2012), while edna (string editing distance matcher) was adopted from the Benchmark track (with regard to performance it is very similar to the previously used Baseline2).
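Both baselines are conceptually simple; a minimal sketch of comparable matchers is shown below. SequenceMatcher.ratio() merely stands in for the edit-distance-based similarity used by edna, and the 0.8 cut-off is an assumption for illustration.

```python
from difflib import SequenceMatcher


def local_name(iri):
    """Local name of an entity IRI (the part after '#', or after the last '/')."""
    return iri.rsplit("#", 1)[-1].rsplit("/", 1)[-1]


def string_equiv(entities1, entities2):
    """StringEquiv-style baseline: match entities whose lowercased local
    names are identical (confidence 1.0)."""
    index = {local_name(e).lower(): e for e in entities2}
    return {(e1, index[local_name(e1).lower()]): 1.0
            for e1 in entities1 if local_name(e1).lower() in index}


def edna_like(entities1, entities2, threshold=0.8):
    """edna-style baseline sketch: string similarity on lowercased local
    names; the similarity function and threshold are stand-ins."""
    alignment = {}
    for e1 in entities1:
        for e2 in entities2:
            sim = SequenceMatcher(None, local_name(e1).lower(),
                                  local_name(e2).lower()).ratio()
            if sim >= threshold:
                alignment[(e1, e2)] = sim
    return alignment
```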
With regard to the two baselines we can group tools according to their position (above the edna baseline, above the StringEquiv baseline, below the StringEquiv baseline). Regarding tool positions, there is a slight difference between ra1-M3 and ra2-M3; the first six positions remain unchanged. In ra1-M3, ra2-M3 and rar2-M3, there are six matchers above the edna baseline (ALIN, ATMatcher, GraphMatcher, LogMap, LogMapLt, and SEBMatcher), two matchers above the StringEquiv baseline (AMD and LSMatch), and four matchers below the StringEquiv baseline (ALIOn, KGMatcher+, TOMATO, and Matcha). Since rar2 is not only consistency-violation free (as ra2) but also conservativity-violation free, we consider rar2 the main reference alignment for this year. It will also be used within the synthesis paper.
Based on the evaluation variants M1 and M2, seven matchers (AMD, ALIN, ALIOn, ATMatcher, KGMatcher+, LSMatch, and SEBMatcher) do not match properties at all. On the other hand, Matcha does not match classes at all, while it dominates in matching properties. Naturally, this has a negative effect on the overall performance of these tools within the M3 evaluation variant.
The table below summarizes the performance results of tools that participated in the last two years of the OAEI Conference track with regard to reference alignment rar2.
Based on this evaluation, we can see that four of the matching tools (KGMatcher+, LogMap, LogMapLt, and LSMatch) did not change their results, while two decreased very slightly in performance (AMD and ATMatcher).
The confidence values of all matches in the standard (sharp) reference alignments for the Conference track are 1.0. For the uncertain version of this track [1], the confidence value of a match has been set equal to the percentage of a group of people who agreed with the match in question (this uncertain version is based on the reference alignment labeled ra1-M3). One key thing to note is that the group was only asked to validate matches that were already present in the existing reference alignments, so some matches had their confidence value reduced from 1.0 to a number near 0, but no new matches were added.
There are two ways that we can evaluate alignment systems according to these ‘uncertain’ reference alignments, which we refer to as discrete and continuous. The discrete evaluation considers any match in the reference alignment with a confidence value of 0.5 or greater to be fully correct and those with a confidence less than 0.5 to be fully incorrect. Similarly, an alignment system’s match is considered a ‘yes’ if the confidence value is greater than or equal to the system’s threshold and a ‘no’ otherwise. In essence, this is the same as the ‘sharp’ evaluation approach, except that some matches have been removed because less than half of the crowdsourcing group agreed with them. The continuous evaluation strategy penalizes an alignment system more if it misses a match on which most people agree than if it misses a more controversial match. For instance, if A = B with a confidence of 0.85 in the reference alignment and an alignment algorithm gives that match a confidence of 0.40, then that is counted as 0.85 * 0.40 = 0.34 of a true positive and 0.85 – 0.40 = 0.45 of a false negative.
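Following the worked example above, a minimal sketch of the continuous evaluation could look as follows. The alignment structures are illustrative, and the way false positives (system matches absent from the reference) are weighted is an assumption of this sketch.

```python
def continuous_scores(reference, system):
    """Continuous uncertain evaluation following the worked example above.

    reference and system map correspondence pairs to confidence values.
    """
    tp = fp = fn = 0.0
    for pair, r in reference.items():
        s = system.get(pair, 0.0)
        tp += r * s              # e.g. 0.85 * 0.40 = 0.34 of a true positive
        fn += max(r - s, 0.0)    # e.g. 0.85 - 0.40 = 0.45 of a false negative
    for pair, s in system.items():
        if pair not in reference:
            fp += s              # assumption: spurious matches count by their confidence
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```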
Below is a graph showing the F-measure, precision, and recall of the different alignment systems when evaluated using the sharp (s), discrete uncertain (d) and continuous uncertain (c) metrics, along with a table containing the same information. The results from this year show that more systems are assigning nuanced confidence values to the matches they produce.
This year, out of the 12 alignment systems, 8 (ALIN, ALIOn, AMD, KGMatcher+, LogMapLt, LSMatch, SEBMatcher, and TOMATO) use 1.0 as the confidence value for all matches they identify. The remaining 4 systems (ATMatcher, GraphMatcher, LogMap, and Matcha) produce a wide range of confidence values.
When comparing the performance of the matchers on the uncertain reference alignments versus the sharp version, we see that in the discrete case all matchers performed the same or better in terms of F-measure. Changes in F-measure in the discrete case ranged from 3 to 18 percent over the sharp reference alignment. ALIOn is the system whose performance improves most (18%), followed by KGMatcher+ (16%) and LSMatch (16%). This was predominantly driven by increased recall, which is a result of the presence of fewer 'controversial' matches in the uncertain version of the reference alignment.
The performance of the matchers whose confidence values are always 1.0 is very similar regardless of whether a discrete or continuous evaluation methodology is used, because many of the matches they find are the ones on which the experts had high agreement, while the ones they missed were the more controversial matches. GraphMatcher produces the highest F-measure under both the continuous (72%) and discrete (72%) evaluation methodologies, indicating that this system's confidence values reflect the degree of agreement among experts on this task well. Of the remaining systems, LogMap has a relatively small drop in F-measure when moving from discrete to continuous evaluation, while Matcha drops 14 percent in F-measure.
Six systems from this year also participated last year, and thus we are again able to make some comparisons over time. The F-measures of these 6 systems held almost constant when evaluated against the uncertain reference alignments. ALIN, ALIOn, GraphMatcher, Matcha, and SEBMatcher are 5 new systems participating this year. ALIOn's F-measure over the sharp reference alignment increases by 18 percent in the discrete case and 20 percent in the continuous case, from 0.34 to 0.40 and 0.41 respectively, which is mainly driven by increased recall. ALIN, GraphMatcher, and SEBMatcher also perform significantly better in both the discrete and continuous cases compared to the sharp case in terms of F-measure; this is also mostly driven by increased recall. From the results, Matcha generally has low precision and recall across the three versions of the reference alignment because it sets its threshold to zero and assigns relatively high confidence values to matches even when the labels of the two entities have low string similarity; for example, "hasBid" and "hasPart" have a similarity over 0.63, and "addedBy" and "awarded by" have a similarity over 0.66. Consequently, it achieves slightly better recall from the sharp to the discrete case (13%), but both its precision and F-measure drop slightly. TOMATO achieves better recall in both the discrete and continuous cases, but its precision is significantly lower than that of the other systems because it outputs multiple matches for the same entity and assigns them all a confidence value of 1.0.
For the evaluation based on logical reasoning we applied the detection of conservativity and consistency principle violations [2, 3]. While the consistency principle states that correspondences should not lead to unsatisfiable classes in the merged ontology, the conservativity principle states that correspondences should not introduce new semantic relationships between concepts from one of the input ontologies [2].
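For the consistency principle, detection essentially amounts to merging the two input ontologies together with the alignment (interpreted as equivalence axioms) and classifying the result with a reasoner. Below is a minimal sketch, assuming owlready2 with its bundled HermiT reasoner is available; the file paths, IRIs, and the single correspondence are illustrative only, and this is not the actual evaluation code used for the track.

```python
# Minimal consistency-principle check sketch (assumes owlready2 is installed).
from owlready2 import get_ontology, default_world, IRIS, sync_reasoner

# Load the two input ontologies (illustrative local file paths).
onto1 = get_ontology("file:///data/conference.owl").load()
onto2 = get_ontology("file:///data/ekaw.owl").load()

# Interpret each correspondence of the alignment as an equivalence axiom
# in the merged ontology (illustrative pair, not taken from a real alignment).
correspondences = [
    ("http://conference#Conference_document", "http://ekaw#Document"),
]
for iri1, iri2 in correspondences:
    cls1, cls2 = IRIS[iri1], IRIS[iri2]
    cls1.equivalent_to.append(cls2)

# Classify the merged ontology; classes inferred equivalent to owl:Nothing
# indicate consistency principle violations.
sync_reasoner()
unsatisfiable = list(default_world.inconsistent_classes())
print(f"{len(unsatisfiable)} unsatisfiable classes:", unsatisfiable)
```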
The table below summarizes the statistics per matcher: the number of alignments that cause an unsatisfiable TBox after merging the ontologies (#Incoh.Align.), the total number of conservativity principle violations across all alignments (#TotConser.Viol.) and its average per alignment (#AvgConser.Viol.), and the total number of consistency principle violations (#TotConsist.Viol.) and its average per alignment (#AvgConsist.Viol.).
Compared to last year, almost the same number of tools (ALIN, KGMatcher+, LogMap, and LSMatch) have no consistency principle violations, while eight tools have some consistency principle violations. Conservativity principle violations occur for all tools, but in low numbers (fewer than 100 per tool). However, we should note that these conservativity principle violations can be "false positives", since an entailment in the aligned ontology can be correct even though it was not derivable from the single input ontologies.
Matcher | #Incoh.Align. | #TotConser.Viol. | #AvgConser.Viol. | #TotConsist.Viol. | #AvgConsist.Viol. |
---|---|---|---|---|---|
ALIN | 0 | 2 | 0.1 | 0 | 0 |
ALIOn | 3 | 17 | 0.81 | 49 | 2.33 |
AMD | 1 | 2 | 0.1 | 6 | 0.29 |
ATMatcher | 1 | 72 | 3.43 | 8 | 0.38 |
GraphMatcher | 6 | 21 | 1 | 61 | 2.9 |
KGMatcher+ | 0 | 1 | 0.05 | 0 | 0 |
LSMatch | 0 | 2 | 0.1 | 0 | 0 |
LogMap | 0 | 21 | 1 | 0 | 0 |
LogMapLt | 3 | 97 | 4.62 | 18 | 0.86 |
Matcha | 2 | 3 | 0.14 | 24 | 1.14 |
SEBMatcher | 4 | 6 | 0.29 | 50 | 2.38 |
TOMATO | 15 | 4 | 0.25 | 777 | 48.56 |
Here we list the ten most frequent unsatisfiable classes that appeared after merging the ontologies with the alignment of any tool. Eight tools generated incoherent alignments.
ekaw#Rejected_Paper - 6
ekaw#Evaluated_Paper - 6
ekaw#Contributed_Talk - 6
ekaw#Camera_Ready_Paper - 6
ekaw#Accepted_Paper - 6
ekaw#Poster_Session - 5
ekaw#Demo_Session - 5
ekaw#Workshop_Session - 4
ekaw#Session - 4
ekaw#Review - 4
Here we list the ten most frequent new semantic relationships between concepts within the input ontologies caused by any tool:
iasted#Record_of_attendance, http://iasted#City - 9 (edas-iasted)
conference#Invited_speaker, http://conference#Conference_participant - 9 (conference-ekaw)
iasted#Session_chair, http://iasted#Speaker - 3 (iasted-sigkdd, ekaw-iasted)
iasted#Session_chair, http://iasted#Reviewer - 3 (ekaw-iasted)
ekaw#Invited_Talk, http://ekaw#Document - 3 (conference-ekaw)
edas#SessionChair, http://edas#Reviewer - 3 (edas-ekaw)
conference#Conference_proceedings, http://conference#Conference_document - 3 (conference-ekaw)
iasted#Video_presentation, http://iasted#Item - 2 (conference-iasted)
iasted#Video_presentation, http://iasted#Document - 2 (conference-iasted)
iasted#Sponzorship, http://iasted#Registration_fee - 2 (iasted-sigkdd)
For this subtrack we have three experimental test cases that involve matching the cross-domain DBpedia ontology to three OntoFarm ontologies. We focus only on entities of the DBpedia ontology (dbo) from the DBpedia namespace, i.e. http://dbpedia.org/ontology/ (therefore we prepared a filtered DBpedia ontology; this differs from last year), and three selected ontologies from OntoFarm: confof, ekaw, and sigkdd, as explained in [5].
Out of 12 systems, 6 managed to match DBpedia to the OntoFarm ontologies (ATMatcher, KGMatcher+, LogMap, LogMapLt, LSMatch, and Matcha).
We evaluated the alignments from these systems; the results are in the table below. Additionally, we added two baselines: StringEquiv, a string matcher based on string equality applied to lowercased local names of entities, and edna, a string editing distance matcher.
We can see that four systems perform better than the two baselines. LogMap dominates with an F1-measure of 0.61. Except for KGMatcher+, most systems achieve lower scores than in the case of matching domain ontologies. This shows that these test cases are more difficult for traditional ontology matching systems.
[1] Michelle Cheatham, Pascal Hitzler: Conference v2.0: An Uncertain Version of the OAEI Conference Benchmark. International Semantic Web Conference (2) 2014: 33-48.
[2] Alessandro Solimando, Ernesto Jiménez-Ruiz, Giovanna Guerrini: Detecting and Correcting Conservativity Principle Violations in Ontology-to-Ontology Mappings. International Semantic Web Conference (2) 2014: 1-16.
[3] Alessandro Solimando, Ernesto Jiménez-Ruiz, Giovanna Guerrini: A Multi-strategy Approach for Detecting and Correcting Conservativity Principle Violations in Ontology Alignments. OWL: Experiences and Directions Workshop 2014 (OWLED 2014). 13-24.
[4] Ondřej Zamazal, Vojtěch Svátek. The Ten-Year OntoFarm and its Fertilization within the Onto-Sphere. Web Semantics: Science, Services and Agents on the World Wide Web, 43, 46-53. 2018.
[5] Martin Šatra, Ondřej Zamazal. Towards Matching of Domain Ontologies to Cross-Domain Ontology: Evaluation Perspective. Ontology Matching 2020.