Results for OAEI 2023 - Knowledge Graph Track
Matching systems
As a pre-test, we executed all systems submitted to OAEI (even those not registered for this track) on a very small matching example (dataset)
with a structure and shape similar to the real knowledge graphs (in fact, it is a small subset of them).
This pre-test showed that not all matching systems are able to complete even this small task due to exceptions or other failures.
The following matching systems failed this pre-test:
- ALIN (NullPointerException, similar to last year)
- TOMATO (produced unparsable alignment files)
- GraphMatcher (dependency problem)
Thus, we executed the following systems:
- BaselineAltLabel
- BaselineLabel
- LogMap
- LogMapLt
- LSMatch
- Matcha
- OLaLa
- SORBETMtch
The source code for the baseline matchers is available.
The BaselineLabel matcher matches all resources that share the same rdfs:label (if multiple resources share the same label, all of them are matched).
BaselineAltLabel additionally uses skos:altLabel; again, when multiple resources share a common label, all of those resources are matched in a cross-product manner.
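The following is a minimal, self-contained sketch of such a label-based matcher using Apache Jena. It is not the official baseline code (linked above); in particular, the lower-casing of labels and the index structure are our own simplifications. Dropping the skos:altLabel property yields the BaselineLabel variant.

```java
import org.apache.jena.rdf.model.*;
import org.apache.jena.vocabulary.RDFS;

import java.util.*;

public class BaselineAltLabelSketch {

    private static final Property ALT_LABEL =
            ResourceFactory.createProperty("http://www.w3.org/2004/02/skos/core#altLabel");

    // Returns URI pairs (source resource, target resource).
    public static List<String[]> match(Model source, Model target) {
        Map<String, List<Resource>> sourceIndex = indexByLabel(source);
        Map<String, List<Resource>> targetIndex = indexByLabel(target);
        List<String[]> alignment = new ArrayList<>();
        for (Map.Entry<String, List<Resource>> entry : sourceIndex.entrySet()) {
            List<Resource> targets = targetIndex.get(entry.getKey());
            if (targets == null) continue;
            // Multiple resources sharing the same label are matched as a cross product.
            for (Resource s : entry.getValue())
                for (Resource t : targets)
                    alignment.add(new String[]{s.getURI(), t.getURI()});
        }
        return alignment;
    }

    // Index all URI resources by their rdfs:label and skos:altLabel values.
    private static Map<String, List<Resource>> indexByLabel(Model model) {
        Map<String, List<Resource>> index = new HashMap<>();
        for (Property labelProperty : Arrays.asList(RDFS.label, ALT_LABEL)) {
            StmtIterator it = model.listStatements(null, labelProperty, (RDFNode) null);
            while (it.hasNext()) {
                Statement st = it.next();
                if (!st.getObject().isLiteral() || !st.getSubject().isURIResource()) continue;
                String label = st.getObject().asLiteral().getString().toLowerCase(Locale.ROOT);
                index.computeIfAbsent(label, k -> new ArrayList<>()).add(st.getSubject());
            }
        }
        return index;
    }
}
```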
Experimental setting
The evaluation was executed on a virtual machine (VM) with 32 GB of RAM and 16 vCPUs (2.4 GHz).
The operating system is Debian 9 with OpenJDK version 1.8.0_265.
We used the MELT toolkit for the evaluation,
which internally uses the SEALS client (version 7.0.5) to execute matchers packaged with SEALS.
Matching systems that use the web packaging are executed with the MatcherHTTPCall class.
The reported times include the environment preparation of SEALS as well as the file upload to the Docker container (the start of the container itself is not timed).
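The following sketch illustrates how such an evaluation can be driven with MELT. The endpoint URI is a placeholder for a locally started matcher container, and the package paths and the track version constant may differ between MELT releases:

```java
import de.uni_mannheim.informatik.dws.melt.matching_base.external.http.MatcherHTTPCall;
import de.uni_mannheim.informatik.dws.melt.matching_data.TrackRepository;
import de.uni_mannheim.informatik.dws.melt.matching_eval.ExecutionResultSet;
import de.uni_mannheim.informatik.dws.melt.matching_eval.Executor;

import java.net.URI;

public class RunKnowledgeGraphTrack {
    public static void main(String[] args) {
        // Matchers using the web (Docker) packaging expose an HTTP endpoint;
        // this URI is a placeholder for a container started on the evaluation VM.
        MatcherHTTPCall matcher = new MatcherHTTPCall(URI.create("http://localhost:8080/match"));

        // Run the matcher on all five test cases of the knowledge graph track.
        ExecutionResultSet results = Executor.run(TrackRepository.Knowledgegraph.V4, matcher);
        System.out.println("Finished " + results.size() + " matching runs.");
    }
}
```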
The alignments were evaluated based on precision, recall, and f-measure for classes, properties, and instances (each in isolation).
Our partial gold standard consists of 1:1 mappings extracted from links contained in wiki pages (cross-wiki links).
The schema was matched by ontology experts.
We assume that in each knowledge graph, only one representation of each concept exists.
This means that if our gold standard contains the mapping <A, B> and a matcher maps A to a different resource C, we can count the mapping <A, C> as a false positive
(the assumption here is that in the second knowledge graph, no concept similar to B exists other than B itself).
The number of false negatives is only increased if a 1:1 mapping in the gold standard is not found by a matcher.
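The following minimal sketch illustrates this counting rule (a simplified illustration of the rule described above, not the actual evaluation code, which is linked below):

```java
import java.util.*;

public class PartialGoldStandardCounting {

    // system: correspondences produced by a matcher, each as {sourceURI, targetURI}.
    // gold:   partial gold standard of 1:1 mappings (source URI -> target URI).
    // Returns {truePositives, falsePositives, falseNegatives}.
    public static int[] count(List<String[]> system, Map<String, String> gold) {
        Set<String> goldTargets = new HashSet<>(gold.values());
        int tp = 0, fp = 0;
        for (String[] correspondence : system) {
            String a = correspondence[0], b = correspondence[1];
            if (b.equals(gold.get(a))) {
                tp++; // <a, b> is exactly a gold standard mapping
            } else if (gold.containsKey(a) || goldTargets.contains(b)) {
                fp++; // <a, b> contradicts a 1:1 gold mapping (only one representation exists)
            }
            // Correspondences where neither end occurs in the gold standard are ignored.
        }
        int fn = gold.size() - tp; // gold mappings the matcher did not find
        return new int[]{tp, fp, fn};
    }
}
```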
The source code for generating the evaluation results is also available.
We imposed a maximum execution time of 12 hours per task; as the runtime table below shows, however, the reported times of LogMapLt and Matcha exceed this limit on individual test cases.
Generated dashboard / CSV file
We also generated an online dashboard with the help of the MELT framework.
Have a look at the knowledge graph results here (the page may take a few seconds to load because it contains roughly 200,000 correspondences).
Moreover, we generated a CSV file which allows analyzing each matcher at the level of individual correspondences.
This should help matcher developers to improve their systems.
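Both artifacts can be produced with MELT's evaluators. A sketch continuing the execution example above (class names as given in the MELT documentation; the output path is a placeholder):

```java
import de.uni_mannheim.informatik.dws.melt.matching_base.external.http.MatcherHTTPCall;
import de.uni_mannheim.informatik.dws.melt.matching_data.TrackRepository;
import de.uni_mannheim.informatik.dws.melt.matching_eval.ExecutionResultSet;
import de.uni_mannheim.informatik.dws.melt.matching_eval.Executor;
import de.uni_mannheim.informatik.dws.melt.matching_eval.evaluator.EvaluatorCSV;

import java.io.File;
import java.net.URI;

public class WriteTrackCsv {
    public static void main(String[] args) {
        MatcherHTTPCall matcher = new MatcherHTTPCall(URI.create("http://localhost:8080/match"));
        ExecutionResultSet results = Executor.run(TrackRepository.Knowledgegraph.V4, matcher);

        // EvaluatorCSV writes aggregated metrics plus per-test-case CSV files that
        // list every correspondence together with its evaluation result.
        EvaluatorCSV evaluator = new EvaluatorCSV(results);
        evaluator.writeToDirectory(new File("./kg-track-results"));
        // The interactive dashboard can be generated from the same result set with
        // MELT's DashboardBuilder (see the MELT documentation for the exact API).
    }
}
```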
Alignment results
The generated alignment files are also available.
Results overview
| Matcher | Time | # testcases | class | property | instance | overall |
|---|---|---|---|---|---|---|
| BaselineAltLabel | 00:11:37 | 5 | 16.4 / 1.00 (1.00) / 0.71 (0.71) / 0.59 (0.59) | 47.8 / 0.99 (0.99) / 0.76 (0.76) / 0.66 (0.66) | 4674.8 / 0.89 (0.89) / 0.84 (0.84) / 0.80 (0.80) | 4739.0 / 0.89 (0.89) / 0.84 (0.84) / 0.80 (0.80) |
| BaselineLabel | 00:11:27 | 5 | 16.4 / 1.00 (1.00) / 0.71 (0.71) / 0.59 (0.59) | 47.8 / 0.99 (0.99) / 0.76 (0.76) / 0.66 (0.66) | 3641.8 / 0.95 (0.95) / 0.80 (0.80) / 0.71 (0.71) | 3706.0 / 0.95 (0.95) / 0.80 (0.80) / 0.71 (0.71) |
| LogMap | 00:56:43 | 5 | 19.4 / 0.93 (0.93) / 0.80 (0.80) / 0.71 (0.71) | 0.0 / 0.00 (0.00) / 0.00 (0.00) / 0.00 (0.00) | 4012.4 / 0.90 (0.90) / 0.78 (0.78) / 0.69 (0.69) | 4031.8 / 0.90 (0.90) / 0.77 (0.77) / 0.68 (0.68) |
| LogMapLt | 64:48:07 | 4 | 23.0 / 0.80 (1.00) / 0.55 (0.69) / 0.43 (0.54) | 0.0 / 0.00 (0.00) / 0.00 (0.00) / 0.00 (0.00) | 6653.8 / 0.73 (0.91) / 0.67 (0.84) / 0.62 (0.78) | 6676.8 / 0.73 (0.92) / 0.66 (0.83) / 0.61 (0.76) |
| LSMatch | 04:47:07 | 5 | 23.6 / 0.97 (0.97) / 0.74 (0.74) / 0.64 (0.64) | 85.6 / 0.73 (0.73) / 0.71 (0.71) / 0.69 (0.69) | 5872.2 / 0.66 (0.66) / 0.59 (0.59) / 0.60 (0.60) | 5981.4 / 0.66 (0.66) / 0.60 (0.60) / 0.61 (0.61) |
| Matcha | 14:30:03 | 5 | 0.0 / 0.00 (0.00) / 0.00 (0.00) / 0.00 (0.00) | 0.0 / 0.00 (0.00) / 0.00 (0.00) / 0.00 (0.00) | 263822.2 / 0.55 (0.55) / 0.63 (0.63) / 0.86 (0.86) | 263822.2 / 0.55 (0.55) / 0.62 (0.62) / 0.84 (0.84) |
| OLaLa | 02:55:06 | 5 | 18.6 / 0.98 (0.98) / 0.68 (0.68) / 0.53 (0.53) | 73.6 / 0.86 (0.86) / 0.83 (0.83) / 0.81 (0.81) | 0.0 / 0.00 (0.00) / 0.00 (0.00) / 0.00 (0.00) | 92.2 / 0.88 (0.88) / 0.03 (0.03) / 0.02 (0.02) |
| SORBETMtch | 00:21:53 | 5 | 22.4 / 0.93 (0.93) / 0.80 (0.80) / 0.73 (0.73) | 0.0 / 0.00 (0.00) / 0.00 (0.00) / 0.00 (0.00) | 0.0 / 0.00 (0.00) / 0.00 (0.00) / 0.00 (0.00) | 22.4 / 0.93 (0.93) / 0.01 (0.01) / 0.00 (0.00) |
Aggregated results per matcher, divided into class, property, instance, and overall alignments.
Each class/property/instance/overall cell shows the averaged number of system correspondences (size) followed by precision (Prec.), f-measure (F-m.), and recall (Rec.).
Time is displayed as HH:MM:SS. The column # testcases indicates the number of test cases for which the tool was able to generate a non-empty alignment.
Two kinds of results are reported: (1) those that do not distinguish between empty and erroneous (or not generated) alignments,
and (2) those considering only non-empty alignments (values in parentheses).
Test case specific results
Overall results
This table shows the overall performance of the matchers for each test case (without dividing into class, property, and instance alignments). In this and the following per-test-case tables, each cell shows the alignment size followed by precision, f-measure, and recall (size / Prec. / F-m. / Rec.).
| Matcher | marvelcinematicuniverse-marvel | memoryalpha-memorybeta | memoryalpha-stexpanded | starwars-swg | starwars-swtor |
|---|---|---|---|---|---|
| BaselineAltLabel | 2574 / 0.86 / 0.76 / 0.68 | 13514 / 0.88 / 0.89 / 0.89 | 3230 / 0.88 / 0.90 / 0.92 | 1712 / 0.92 / 0.74 / 0.63 | 2665 / 0.92 / 0.91 / 0.90 |
| BaselineLabel | 1879 / 0.90 / 0.69 / 0.56 | 10552 / 0.95 / 0.85 / 0.77 | 2582 / 0.98 / 0.90 / 0.83 | 1245 / 0.96 / 0.68 / 0.53 | 2272 / 0.95 / 0.89 / 0.84 |
| LogMap | 2255 / 0.84 / 0.59 / 0.46 | 11648 / 0.89 / 0.82 / 0.76 | 2491 / 0.88 / 0.81 / 0.75 | 1577 / 0.94 / 0.79 / 0.68 | 2188 / 0.94 / 0.84 / 0.75 |
| LogMapLt | 0 / 0.00 / 0.00 / 0.00 | 16688 / 0.90 / 0.83 / 0.77 | 3577 / 0.94 / 0.88 / 0.82 | 2807 / 0.91 / 0.74 / 0.62 | 3635 / 0.91 / 0.87 / 0.83 |
| LSMatch | 2147 / 0.63 / 0.50 / 0.42 | 19073 / 0.59 / 0.66 / 0.75 | 5065 / 0.53 / 0.63 / 0.79 | 888 / 0.76 / 0.37 / 0.24 | 2734 / 0.81 / 0.82 / 0.82 |
| Matcha | 1193676 / 0.05 / 0.09 / 0.69 | 69224 / 0.57 / 0.70 / 0.90 | 13597 / 0.67 / 0.78 / 0.93 | 18631 / 0.72 / 0.75 / 0.79 | 23983 / 0.74 / 0.80 / 0.89 |
| OLaLa | 34 / 0.82 / 0.01 / 0.01 | 127 / 0.91 / 0.01 / 0.01 | 104 / 0.93 / 0.05 / 0.02 | 61 / 0.86 / 0.03 / 0.02 | 135 / 0.85 / 0.07 / 0.04 |
| SORBETMtch | 9 / 1.00 / 0.00 / 0.00 | 29 / 0.83 / 0.00 / 0.00 | 31 / 0.89 / 0.01 / 0.00 | 13 / 1.00 / 0.01 / 0.00 | 30 / 0.93 / 0.02 / 0.01 |
Class results
| Matcher | marvelcinematicuniverse-marvel | memoryalpha-memorybeta | memoryalpha-stexpanded | starwars-swg | starwars-swtor |
|---|---|---|---|---|---|
| BaselineAltLabel | 8 / 1.00 / 1.00 / 1.00 | 19 / 1.00 / 0.44 / 0.29 | 19 / 1.00 / 0.63 / 0.46 | 9 / 1.00 / 0.57 / 0.40 | 27 / 1.00 / 0.89 / 0.80 |
| BaselineLabel | 8 / 1.00 / 1.00 / 1.00 | 19 / 1.00 / 0.44 / 0.29 | 19 / 1.00 / 0.63 / 0.46 | 9 / 1.00 / 0.57 / 0.40 | 27 / 1.00 / 0.89 / 0.80 |
| LogMap | 10 / 1.00 / 1.00 / 1.00 | 21 / 0.88 / 0.64 / 0.50 | 26 / 0.78 / 0.64 / 0.54 | 12 / 1.00 / 0.89 / 0.80 | 28 / 1.00 / 0.85 / 0.73 |
| LogMapLt | 0 / 0.00 / 0.00 / 0.00 | 23 / 1.00 / 0.44 / 0.29 | 27 / 1.00 / 0.70 / 0.54 | 12 / 1.00 / 0.75 / 0.60 | 30 / 1.00 / 0.85 / 0.73 |
| LSMatch | 8 / 1.00 / 1.00 / 1.00 | 26 / 1.00 / 0.44 / 0.29 | 25 / 1.00 / 0.70 / 0.54 | 19 / 1.00 / 0.75 / 0.60 | 40 / 0.86 / 0.83 / 0.80 |
| Matcha | 0 / 0.00 / 0.00 / 0.00 | 0 / 0.00 / 0.00 / 0.00 | 0 / 0.00 / 0.00 / 0.00 | 0 / 0.00 / 0.00 / 0.00 | 0 / 0.00 / 0.00 / 0.00 |
| OLaLa | 10 / 1.00 / 0.67 / 0.50 | 22 / 1.00 / 0.53 / 0.36 | 26 / 1.00 / 0.70 / 0.54 | 10 / 1.00 / 0.75 / 0.60 | 25 / 0.91 / 0.77 / 0.67 |
| SORBETMtch | 9 / 1.00 / 1.00 / 1.00 | 29 / 0.83 / 0.50 / 0.36 | 31 / 0.89 / 0.73 / 0.62 | 13 / 1.00 / 0.89 / 0.80 | 30 / 0.93 / 0.90 / 0.87 |
Property results
| Matcher | marvelcinematicuniverse-marvel | memoryalpha-memorybeta | memoryalpha-stexpanded | starwars-swg | starwars-swtor |
|---|---|---|---|---|---|
| BaselineAltLabel | 7 / 1.00 / 0.53 / 0.36 | 41 / 1.00 / 0.51 / 0.34 | 46 / 0.97 / 0.80 / 0.68 | 42 / 1.00 / 1.00 / 1.00 | 103 / 1.00 / 0.94 / 0.89 |
| BaselineLabel | 7 / 1.00 / 0.53 / 0.36 | 41 / 1.00 / 0.51 / 0.34 | 46 / 0.97 / 0.80 / 0.68 | 42 / 1.00 / 1.00 / 1.00 | 103 / 1.00 / 0.94 / 0.89 |
| LogMap | 0 / 0.00 / 0.00 / 0.00 | 0 / 0.00 / 0.00 / 0.00 | 0 / 0.00 / 0.00 / 0.00 | 0 / 0.00 / 0.00 / 0.00 | 0 / 0.00 / 0.00 / 0.00 |
| LogMapLt | 0 / 0.00 / 0.00 / 0.00 | 0 / 0.00 / 0.00 / 0.00 | 0 / 0.00 / 0.00 / 0.00 | 0 / 0.00 / 0.00 / 0.00 | 0 / 0.00 / 0.00 / 0.00 |
| LSMatch | 36 / 0.82 / 0.82 / 0.82 | 112 / 0.62 / 0.60 / 0.58 | 82 / 0.62 / 0.62 / 0.61 | 79 / 0.72 / 0.68 / 0.65 | 119 / 0.88 / 0.83 / 0.79 |
| Matcha | 0 / 0.00 / 0.00 / 0.00 | 0 / 0.00 / 0.00 / 0.00 | 0 / 0.00 / 0.00 / 0.00 | 0 / 0.00 / 0.00 / 0.00 | 0 / 0.00 / 0.00 / 0.00 |
| OLaLa | 24 / 0.80 / 0.76 / 0.73 | 105 / 0.90 / 0.90 / 0.89 | 78 / 0.92 / 0.90 / 0.88 | 51 / 0.84 / 0.82 / 0.80 | 110 / 0.84 / 0.79 / 0.75 |
| SORBETMtch | 0 / 0.00 / 0.00 / 0.00 | 0 / 0.00 / 0.00 / 0.00 | 0 / 0.00 / 0.00 / 0.00 | 0 / 0.00 / 0.00 / 0.00 | 0 / 0.00 / 0.00 / 0.00 |
Instance results
| Matcher | marvelcinematicuniverse-marvel | memoryalpha-memorybeta | memoryalpha-stexpanded | starwars-swg | starwars-swtor |
|---|---|---|---|---|---|
| BaselineAltLabel | 2559 / 0.86 / 0.76 / 0.68 | 13454 / 0.88 / 0.89 / 0.89 | 3165 / 0.88 / 0.90 / 0.93 | 1661 / 0.92 / 0.74 / 0.62 | 2535 / 0.92 / 0.91 / 0.90 |
| BaselineLabel | 1864 / 0.90 / 0.69 / 0.56 | 10492 / 0.95 / 0.85 / 0.77 | 2517 / 0.98 / 0.91 / 0.84 | 1194 / 0.95 / 0.67 / 0.52 | 2142 / 0.95 / 0.89 / 0.84 |
| LogMap | 2245 / 0.84 / 0.60 / 0.46 | 11627 / 0.89 / 0.82 / 0.76 | 2465 / 0.88 / 0.82 / 0.77 | 1565 / 0.94 / 0.80 / 0.69 | 2160 / 0.94 / 0.86 / 0.78 |
| LogMapLt | 0 / 0.00 / 0.00 / 0.00 | 16665 / 0.90 / 0.83 / 0.77 | 3550 / 0.94 / 0.89 / 0.84 | 2795 / 0.91 / 0.75 / 0.63 | 3605 / 0.91 / 0.89 / 0.87 |
| LSMatch | 2103 / 0.63 / 0.50 / 0.41 | 18935 / 0.59 / 0.66 / 0.75 | 4958 / 0.53 / 0.63 / 0.80 | 790 / 0.76 / 0.36 / 0.23 | 2575 / 0.81 / 0.82 / 0.82 |
| Matcha | 1193676 / 0.05 / 0.09 / 0.70 | 69224 / 0.57 / 0.70 / 0.90 | 13597 / 0.67 / 0.79 / 0.96 | 18631 / 0.72 / 0.76 / 0.81 | 23983 / 0.74 / 0.82 / 0.93 |
| OLaLa | 0 / 0.00 / 0.00 / 0.00 | 0 / 0.00 / 0.00 / 0.00 | 0 / 0.00 / 0.00 / 0.00 | 0 / 0.00 / 0.00 / 0.00 | 0 / 0.00 / 0.00 / 0.00 |
| SORBETMtch | 0 / 0.00 / 0.00 / 0.00 | 0 / 0.00 / 0.00 / 0.00 | 0 / 0.00 / 0.00 / 0.00 | 0 / 0.00 / 0.00 / 0.00 | 0 / 0.00 / 0.00 / 0.00 |
Runtime
| Matcher | marvelcinematicuniverse-marvel | memoryalpha-memorybeta | memoryalpha-stexpanded | starwars-swg | starwars-swtor |
|---|---|---|---|---|---|
| BaselineAltLabel | 00:02:45 | 00:01:54 | 00:01:11 | 00:02:54 | 00:02:50 |
| BaselineLabel | 00:02:40 | 00:01:50 | 00:01:11 | 00:02:52 | 00:02:51 |
| LogMap | 00:31:16 | 00:05:46 | 00:03:33 | 00:08:20 | 00:07:46 |
| LogMapLt | 00:00:00 | 14:06:00 | 09:46:10 | 20:50:04 | 20:05:53 |
| LSMatch | 02:01:17 | 01:03:51 | 00:22:38 | 00:42:18 | 00:37:00 |
| Matcha | 12:16:32 | 00:58:47 | 00:15:35 | 00:34:54 | 00:24:13 |
| OLaLa | 00:19:07 | 00:36:16 | 00:31:09 | 00:42:06 | 00:46:26 |
| SORBETMtch | 00:04:54 | 00:03:42 | 00:02:57 | 00:05:07 | 00:05:12 |
Organizers
- Sven Hertling (University of Mannheim, Germany), main contact for the track, sven at informatik dot uni-mannheim dot de
- Heiko Paulheim (University of Mannheim, Germany)