Ontology Alignment Evaluation Initiative - OAEI-2021 Campaign

Results for OAEI 2021 - Common Knowledge Graphs Track

Experiment Settings

The evaluation was executed on a Linux virtual machine with 128 GB of RAM and 16 vCPUs (2.4 GHz). Precision, Recall and F-measure have been computed with respect to the reference alignment for this task. The gold standard for this task is only a partial gold standard [1]. Therefore, to avoid over-penalising systems that may discover reasonable matches that are not coded in the gold standard, we ignore a predicted match if neither of its two classes appears in any true positive pair of the gold standard.
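
As a rough illustration, the following Python sketch computes Precision, Recall and F-measure under this filtering rule, assuming the gold standard and a system's predicted class alignment are given as sets of (NELL class, DBpedia class) pairs. All names are illustrative; this is not the track's actual evaluation code, which is linked in the next section.

    # Illustrative sketch of the partial-gold-standard evaluation described above.
    def evaluate(predicted, gold):
        """predicted, gold: sets of (nell_class, dbpedia_class) pairs."""
        # Classes that are covered by the partial gold standard.
        gold_nell = {s for s, _ in gold}
        gold_dbpedia = {t for _, t in gold}

        # Ignore predictions where neither class occurs in the gold standard,
        # so systems are not penalised for matches outside its scope.
        considered = {(s, t) for s, t in predicted
                      if s in gold_nell or t in gold_dbpedia}

        tp = len(considered & gold)
        precision = tp / len(considered) if considered else 0.0
        recall = tp / len(gold) if gold else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        return precision, recall, f1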

Generated alignments

The code used to generate the presented results is available here. The alignment files produced by all of the evaluated matchers are also available to download here.

Participation and Success

We have evaluated all 17 participating systems on the task of mapping NELL and DBpedia classes. The following matching systems produced an exception: The following matching systems completed the task but produced empty alignment files: The following matchers were not able to complete the matching task within the 12-hour timeout: Therefore, our results only include the matchers that completed the task with a non-empty alignment file within the 12-hour timeout, namely: We have also evaluated a simple string-based baseline matcher, which computes the similarity between class labels in order to generate candidate matching classes (a sketch of the idea is given below). The baseline we use is the SimpleStringMatcher available through the Matching EvaLuation Toolkit (MELT); its source code can be found here.
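
The Python snippet below sketches what such a label-based baseline does: it proposes a match whenever two classes have identical normalised labels. It is not the MELT SimpleStringMatcher itself (whose source is linked above), and all function and variable names are hypothetical.

    import re

    def normalise(label):
        # Split camelCase, lower-case, and drop non-alphanumeric characters.
        label = re.sub(r'(?<=[a-z])(?=[A-Z])', ' ', label)
        return tuple(re.sub(r'[^a-z0-9 ]', ' ', label.lower()).split())

    def baseline_match(nell_labels, dbpedia_labels):
        """nell_labels, dbpedia_labels: dicts mapping class URI -> label string."""
        index = {}
        for uri, label in dbpedia_labels.items():
            index.setdefault(normalise(label), []).append(uri)
        alignment = []
        for uri, label in nell_labels.items():
            for candidate in index.get(normalise(label), []):
                # Exact match on normalised labels, so confidence 1.0.
                alignment.append((uri, candidate, 1.0))
        return alignment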

Results

All 9 evaluated matchers finished the task with non-empty alignment files. The table below shows the aggregated results for all participating systems. The size column indicates the total number of class alignments discovered by each matcher. While the majority of the matchers discovered alignments at both the schema and instance levels, we have only evaluated class alignments, as the gold standard does not include instance-level ground truth. AML was the only matcher that discovered instance-level candidate alignments exclusively, which is why its class alignment size is 0. Seven matchers were able to outperform the baseline label matcher. In terms of runtime, reported below as HH:MM:SS, all matchers finished the task in less than 20 minutes except for KGMatcher, which required approximately two hours. The shortest runtimes were observed for ATmatcher and LogMap, both finishing in less than 4 minutes.

Matcher         Alignment Size  Precision  Recall  F1 measure  Time (HH:MM:SS)
AML                          0       0.00    0.00        0.00         00:05:19
LogMap                     105       0.99    0.80        0.88         00:03:19
ALOD2Vec                   103       1.00    0.80        0.89         00:04:13
OTMapOnto                  123       0.90    0.84        0.87         00:08:16
KGMatcher                  122       0.97    0.91        0.94         01:55:35
Wiktionary                 103       1.00    0.80        0.89         00:04:32
AMD                        101       0.00    0.00        0.00         00:18:27
ATmatcher                  104       1.00    0.80        0.89         00:03:16
LsMatch                    102       0.99    0.78        0.87         00:16:45
BaselineMatcher             78       1.00    0.60        0.75         00:00:37

Organizers

This track is organized by:

For any questions or suggestions about the track, please email: oafallatah1 at sheffield dot ac dot uk

References

[1] Fallatah, O., Zhang, Z., Hopfgartner, F.: A gold standard dataset for large knowledge graphs matching. In: Proceedings of the 15th Ontology Matching Workshop (2020). [pdf]