Ontology Alignment Evaluation Initiative - OAEI-2018 Campaign

Results of Evaluation for the Conference track within OAEI 2018


Participants

This year, there are 12 participants (ALIN, ALOD2Vec, AML, DOME, FCAMapX, Holontology, KEPLER, Lily, LogMap, LogMapLt, SANOM and XMap) that managed to generate meaningful output; these are the matchers that were submitted with the ability to run the Conference track. We also provide a comparison with tools that participated in previous years of OAEI in terms of the highest average F1-measure.

Data

Participants alignments

You can download a subset of all alignments, namely those for which there is a reference alignment. We provide the alignments as generated by the SEALS platform (afterwards we applied some small modifications, which we explain below). Alignments are stored as follows: matcher-ontology1-ontology2.rdf.
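The files follow the Alignment Format used by OAEI. Below is a minimal sketch of reading the correspondences of one such file with rdflib; the namespace IRI and the file name are assumptions based on the usual setup, not taken from this page.

    # Minimal sketch: read correspondences from one alignment file with rdflib.
    from rdflib import Graph, Namespace

    ALIGN = Namespace("http://knowledgeweb.semanticweb.org/heterogeneity/alignment#")

    g = Graph()
    g.parse("AML-cmt-ekaw.rdf")   # hypothetical file following matcher-ontology1-ontology2.rdf

    for cell in g.subjects(predicate=ALIGN.entity1):
        e1 = g.value(cell, ALIGN.entity1)
        e2 = g.value(cell, ALIGN.entity2)
        relation = g.value(cell, ALIGN.relation)
        measure = g.value(cell, ALIGN.measure)
        print(e1, relation, e2, measure)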

Evaluation modalities

Tools have been evaluated based on crisp reference alignments, an uncertain version of the reference alignment, and logical reasoning.

Evaluation based on crisp reference alignments

We have three variants of crisp reference alignments (the confidence values of all matches are 1.0). They contain 21 alignments (test cases), which corresponds to the complete alignment space between 7 ontologies (7*6/2 = 21 pairs) from the OntoFarm data set. This is a subset of all 16 ontologies within this track [4]; see the OntoFarm data set web page.

For each reference alignment we provide three evaluation variants: M1 (class correspondences only), M2 (property correspondences only) and M3 (both classes and properties).

rar2-M3 is used as the main reference alignment for this year. It will also be used within the synthesis paper.

        ra1       ra2       rar2
    M1  ra1-M1    ra2-M1    rar2-M1
    M2  ra1-M2    ra2-M2    rar2-M2
    M3  ra1-M3    ra2-M3    rar2-M3

Evaluation setting and tables description

Regarding evaluation based on reference alignments, we first filtered out (from the alignments generated using the SEALS platform) all instance-to-any_entity and owl:Thing-to-any_entity correspondences prior to computing Precision/Recall/F1-measure/F2-measure/F0.5-measure, because they are not contained in the reference alignments. In order to compute average Precision and Recall over all those alignments, we used absolute scores (i.e. we computed precision and recall from the absolute counts of TP, FP, and FN across all 21 test cases). This corresponds to micro-averaged precision and recall; therefore, the resulting numbers can differ slightly from those computed by the SEALS platform (macro-averaged precision and recall). Then, we computed F1-measure in the standard way. Finally, we found the highest average F1-measure with thresholding (where possible).
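A minimal sketch of this micro-averaged computation and the threshold sweep is given below; it is illustrative only, and the input data structures are assumptions rather than the evaluation code actually used.

    # Illustrative sketch: micro-averaged precision/recall over all test cases,
    # plus a sweep over confidence thresholds to find the highest average F1-measure.
    # `system` maps a test case to {(entity1, entity2): confidence};
    # `reference` maps a test case to a set of (entity1, entity2) pairs.

    def micro_scores(system, reference, threshold=0.0):
        tp = fp = fn = 0
        for case, ref in reference.items():
            found = {pair for pair, conf in system.get(case, {}).items() if conf >= threshold}
            tp += len(found & ref)
            fp += len(found - ref)
            fn += len(ref - found)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        return precision, recall

    def best_f1(system, reference, thresholds=tuple(i / 100 for i in range(101))):
        best = (0.0, None)  # (F1-measure, threshold)
        for t in thresholds:
            p, r = micro_scores(system, reference, t)
            f1 = 2 * p * r / (p + r) if p + r else 0.0
            if f1 > best[0]:
                best = (f1, t)
        return best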

In order to provide some context for understanding matcher performance, we included two simple string-based matchers as baselines. StringEquiv (previously called Baseline1) is a string matcher based on string equality applied to the local names of entities, which are lowercased beforehand (this baseline was also used within the anatomy track in 2012), and edna (a string editing distance matcher) was adopted from the benchmark track (with regard to performance it is very similar to the previously used Baseline2).
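For illustration, both baselines reduce to comparing lowercased local names. A rough sketch follows; the threshold value and the similarity function used for the edna-style matcher are assumptions (the real matcher uses string edit distance).

    # Rough sketch of the two string-based baselines over lowercased local names.
    from difflib import SequenceMatcher

    def local_name(uri):
        # take the fragment after '#' (or the last path segment) and lowercase it
        return uri.rsplit("#", 1)[-1].rsplit("/", 1)[-1].lower()

    def string_equiv(uri1, uri2):
        # StringEquiv: exact equality of lowercased local names
        return local_name(uri1) == local_name(uri2)

    def edna_like(uri1, uri2, threshold=0.9):
        # edna-style matcher: string similarity above a threshold
        # (SequenceMatcher ratio used here as a stand-in for edit distance)
        return SequenceMatcher(None, local_name(uri1), local_name(uri2)).ratio() >= threshold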

In the tables below, there are results of all 12 tools with regard to all combinations of evaluation variants with crisp reference alignments. We report precision, recall, F1-measure, F2-measure and F0.5-measure, computed at the threshold that provides the highest average F1-measure for each matcher. F1-measure is the harmonic mean of precision and recall, F2-measure (beta=2) weights recall higher than precision, and F0.5-measure (beta=0.5) weights precision higher than recall.
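In general, for precision P and recall R:

    F_beta = (1 + beta^2) * P * R / (beta^2 * P + R)

so beta = 1 gives the harmonic mean, beta = 2 favours recall, and beta = 0.5 favours precision.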

ra1-M1
[results table for ra1-M1]

ra1-M2
[results table for ra1-M2]

ra1-M3
[results table for ra1-M3]

ra2-M1
[results table for ra2-M1]

ra2-M2
[results table for ra2-M2]

ra2-M3
[results table for ra2-M3]

rar2-M1
[results table for rar2-M1]

rar2-M2
[results table for rar2-M2]

rar2-M3
[results table for rar2-M3]

Comparison of OAEI 2018 and 2017

The table below summarizes performance results of tools that participated in the conference track in the last two years of OAEI, with regard to reference alignment rar2.

Performance results summary, OAEI 2018 and 2017

Based on this evaluation, we can see that six of the matching tools did not change their results (or changed them only slightly, in the case of ALIN and XMap). SANOM showed a larger change, improving its F1-measure and recall.

Difference between 2018 and 2017 results

Results visualization on precision/recall triangular graph based on the rar2-M3 reference alignment

All tools are visualized in terms of their performance regarding average F1-measure in the figure below. Tools are represented as squares or triangles, and baselines as circles. The horizontal line depicts the level of precision/recall, while values of average F1-measure are depicted by the areas bordered by the corresponding lines F1-measure=0.[5|6|7].

precision/recall triangular graph for conference and F1-measure based on rar2-M3

Discussion for evaluation based on crisp reference alignments

With regard to the two baselines, we can group tools according to their position (above the best edna baseline, above the StringEquiv baseline, below the StringEquiv baseline). Regarding the tools' positions, there are slight differences between ra1-M3, ra2-M3 and rar2-M3. In ra2-M3, Holontology improved from "above StringEquiv baseline" to "above best edna baseline" and Lily improved from "below StringEquiv baseline" to "above StringEquiv baseline". In rar2-M3, ALIN descended from "above best edna baseline" to "above StringEquiv baseline", compared to ra1-M3 and ra2-M3. In ra1-M3, there are seven matchers above the edna baseline (ALIN, AML, SANOM, LogMap, XMap, FCAMapX and DOME) and four matchers above the StringEquiv baseline (LogMapLt, KEPLER, Holontology and ALOD2Vec). LogMap has the largest drop (by 0.06 of F1-measure) between ra2-M3 and ra1-M3. Since rar2 is not only consistency-violation free (as ra2) but also conservativity-violation free, we consider rar2 as the main reference alignment for this year. It will also be used within the synthesis paper.

Based on the evaluation variants M1 and M2, two matchers (ALIN and Lily) do not match properties at all. Naturally, this has a negative effect on these tools' overall performance within the M3 evaluation variant.

Evaluation based on the uncertain version of the reference alignment

Evaluation setting

The confidence values of all matches in the standard (sharp) reference alignments for the conference track are all 1.0. For the uncertain version of this track, the confidence value of a match has been set equal to the percentage of a group of people who agreed with the match in question (this uncertain version is based on reference alignment labeled ra1). One key thing to note is that the group was only asked to validate matches that were already present in the existing reference alignments - so some matches had their confidence value reduced from 1.0 to a number near 0, but no new matches were added.

There are two ways that we can evaluate alignment systems according to these 'uncertain' reference alignments, which we refer to as discrete and continuous. The discrete evaluation considers any match in the reference alignment with a confidence value of 0.5 or greater to be fully correct and those with a confidence less than 0.5 to be fully incorrect. Similarly, an alignment system’s match is considered a 'yes' if the confidence value is greater than or equal to the system’s threshold and a 'no' otherwise. In essence, this is the same as the 'sharp' evaluation approach, except that some matches have been removed because less than half of the crowdsourcing group agreed with them. The continuous evaluation strategy penalizes an alignment system more if it misses a match on which most people agree than if it misses a more controversial match. For instance, if A = B with a confidence of 0.85 in the reference alignment and an alignment algorithm gives that match a confidence of 0.40, then that is counted as 0.85 * 0.40 = 0.34 of a true positive and 0.85 – 0.40 = 0.45 of a false negative.
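A small sketch contrasting the two scoring strategies for a single reference match is given below; the function names and input structure are assumptions for illustration, not the evaluation code actually used.

    # Illustrative scoring of one reference match (ref_conf) against a system match (sys_conf).
    def discrete_counts(ref_conf, sys_conf, system_threshold):
        if ref_conf < 0.5:               # below 0.5: treated as not in the reference at all
            return 0.0, 0.0
        found = sys_conf is not None and sys_conf >= system_threshold
        return (1.0, 0.0) if found else (0.0, 1.0)   # (true positive, false negative)

    def continuous_counts(ref_conf, sys_conf):
        sys_conf = sys_conf or 0.0
        tp = ref_conf * sys_conf                  # e.g. 0.85 * 0.40 = 0.34
        fn = max(ref_conf - sys_conf, 0.0)        # e.g. 0.85 - 0.40 = 0.45
        return tp, fn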

Results

Below is a graph showing the F-measure, precision, and recall of the different alignment systems when evaluated using the sharp (s), discrete uncertain (d) and continuous uncertain (c) metrics, along with a table containing the same information. The results from this year show that more systems are assigning nuanced confidence values to the matches they produce.

Graph and results table for evaluation based on the uncertain reference alignments

Out of the 12 alignment systems, six (ALIN, ALOD2Vec, DOME, FCAMapX, Holontology and LogMapLt) use 1.0 as the confidence value for all matches they identify. The remaining six systems (AML, KEPLER, Lily, LogMap, SANOM and XMap) have a wide variation of confidence values.

Discussion for evaluation based on the uncertain reference alignments

When comparing the performance of the matchers on the uncertain reference alignments versus that on the sharp version, we see that in the discrete case all matchers except Lily performed the same or better in terms of F-measure (Lily's F-measure dropped by one one-hundredth). Change in F-measure ranged from -1 to 15 percent over the sharp reference alignment. This was predominantly driven by increased recall, which is a result of the presence of fewer 'controversial' matches in the uncertain version of the reference alignment.

The performance of the matchers with confidence values always 1.0 is very similar regardless of whether a discrete or continuous evaluation methodology is used, because many of the matches they find are the ones that the experts had high agreement about, while the ones they missed were the more controversial matches. AML produces a fairly wide range of confidence values and has the highest F-measure under both the continuous and discrete evaluation methodologies, indicating that this system's confidence evaluation does a good job of reflecting cohesion among experts on this task. Of the remaining systems, four (KEPLER, LogMap, SANOM and XMap) have relatively small drops in F-measure when moving from discrete to continuous evaluation. Lily's performance drops drastically under the continuous evaluation methodology. This is because the matcher assigns low confidence values to some matches in which the labels are equivalent strings, which many crowdsourcers agreed with unless there was a compelling technical reason not to. This hurts recall, but using a low threshold value in the discrete version of the evaluation metrics 'hides' this problem.

Seven systems from this year also participated last year, and thus we are again able to make some comparisons over time. The F-measures of five of these systems essentially held constant when evaluated against the uncertain reference alignments. The exceptions were ALIN and SANOM, whose performance improved drastically.

Evaluation based on logical reasoning

For evaluation based on logical reasoning we applied detection of conservativity and consistency principle violations [2, 3]. While the consistency principle proposes that correspondences should not lead to unsatisfiable classes in the merged ontology, the conservativity principle proposes that correspondences should not introduce new semantic relationships between concepts from one of the input ontologies [2].
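As a rough illustration of the consistency check, one can merge two ontologies via the equivalence correspondences of an alignment and ask a reasoner for unsatisfiable classes. The sketch below uses owlready2; the file paths, IRIs and the single correspondence are placeholders, a working Java/HermiT setup is assumed, and this is not the evaluation code actually used.

    # Rough sketch: add owl:equivalentClass axioms from an alignment, classify the
    # merged ontology with a reasoner, and list unsatisfiable classes.
    from owlready2 import get_ontology, default_world, sync_reasoner, IRIS

    onto1 = get_ontology("file:///data/cmt.owl").load()     # placeholder paths
    onto2 = get_ontology("file:///data/ekaw.owl").load()

    alignment = [("http://cmt#Paper", "http://ekaw#Paper")]  # illustrative correspondence
    with onto1:
        for iri1, iri2 in alignment:
            cls1, cls2 = IRIS[iri1], IRIS[iri2]
            cls1.equivalent_to.append(cls2)                  # merge via equivalence

    sync_reasoner()                                          # e.g. HermiT classification
    unsatisfiable = list(default_world.inconsistent_classes())
    print(len(unsatisfiable), "unsatisfiable classes")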

The table below summarizes statistics per matcher: the number of alignments (#Align.), the number of alignments that cause an unsatisfiable TBox after the ontologies are merged (#Incoh.Align.), the total number of conservativity principle violations over all alignments (#TotConser.Viol.) and its average per alignment (#AvgConser.Viol.), and the total number of consistency principle violations (#TotConsist.Viol.) and its average per alignment (#AvgConsist.Viol.).

Only three tools (ALIN, AML and LogMap) have no consistency principle violations (in comparison to five tools last year and seven tools two years ago). This year all tools have some conservativity principle violations (in comparison to one tool having no conservativity principle violations last year). We should note that these conservativity principle violations can be "false positives", since an entailment in the aligned ontology can be correct even though it was not derivable in the single input ontologies.

Matcher #Align. #Incoh.Align. #TotConser.Viol. #AvgConser.Viol. #TotConsist.Viol. #AvgConsist.Viol.
ALIN 21 0 2 0.1 0 0
ALOD2Vec 21 6 124 5.9 27 1.29
AML 21 0 39 1.86 0 0
DOME 21 3 106 5.05 10 0.48
FCAMapX 21 11 124 5.9 273 13
Holontology 21 3 66 3.14 10 0.48
KEPLER 21 12 123 5.86 159 7.57
Lily 21 9 140 7 124 6.2
LogMap 21 0 25 1.19 0 0
LogMapLt 21 5 96 4.57 25 1.19
SANOM 21 9 103 5.15 92 4.6
XMap 21 4 53 2.65 14 0.7

Statistics of consistency and conservativity principle violations

Here we list the ten most frequent unsatisfiable classes that appeared after the ontologies were merged with the alignment of any tool. Nine tools generated incoherent alignments.

ekaw#Contributed_Talk - 9
ekaw#Camera_Ready_Paper - 9
ekaw#Industrial_Session - 6
ekaw#Conference_Session - 6
edas#TwoLevelConference - 6
edas#SingleLevelConference - 6
edas#ConferenceSession - 6
edas#Conference - 6
cmt#Conference - 6
sigkdd#Conference - 5

Here we list the ten most frequent new semantic relationships between concepts within the input ontologies that were caused by any tool:

<conference#Invited_speaker, conference#Conference_participant> - 11
	conference-sigkdd
	conference-ekaw
<iasted#Record_of_attendance, iasted#City> - 9
	edas-iasted
<iasted#Session_chair, iasted#Speaker> - 8
	ekaw-iasted
	iasted-sigkdd
<iasted#Sponzorship, iasted#Registration_fee> - 7
	iasted-sigkdd
<iasted#Sponzorship, iasted#Fee> - 7
	iasted-sigkdd
<iasted#Hotel_fee, iasted#Registration_fee> - 7
	iasted-sigkdd
<iasted#Fee_for_extra_trip, iasted#Registration_fee> - 7
	iasted-sigkdd
<conference#Tutorial, conference#Conference_document> - 7
	conference-ekaw
	conference-iasted
<conference#Tutorial, conference#Conference_contribution> - 7
	conference-ekaw
	conference-iasted
<sigkdd#Program_Chair, sigkdd#Program_Committee_member> - 6
	cmt-sigkdd

Organizers

References

[1] Michelle Cheatham, Pascal Hitzler: Conference v2.0: An Uncertain Version of the OAEI Conference Benchmark. International Semantic Web Conference (2) 2014: 33-48.

[2] Alessandro Solimando, Ernesto Jiménez-Ruiz, Giovanna Guerrini: Detecting and Correcting Conservativity Principle Violations in Ontology-to-Ontology Mappings. International Semantic Web Conference (2) 2014: 1-16.

[3] Alessandro Solimando, Ernesto Jiménez-Ruiz, Giovanna Guerrini: A Multi-strategy Approach for Detecting and Correcting Conservativity Principle Violations in Ontology Alignments. OWL: Experiences and Directions Workshop 2014 (OWLED 2014). 13-24.

[4] Ondřej Zamazal, Vojtěch Svátek. The Ten-Year OntoFarm and its Fertilization within the Onto-Sphere. Web Semantics: Science, Services and Agents on the World Wide Web, 43, 46-53. 2017.