The growth of the ontology alignment area over the past ten years has led to the development of many ontology alignment tools. After several years of experience in the OAEI, we observed that the results have improved only slightly in terms of alignment quality (precision/recall resp. F-measure). Based on this insight, it appears that fully automatic ontology matching approaches are slowly reaching an upper bound on the alignment quality they can achieve. (Jimenez-Ruiz et al., 2012) have shown that simulating user interactions with a 30% error rate during the alignment process leads to the same results as non-interactive matching. Thus, in addition to the validation of automatically generated alignments by domain experts, we believe that there is further room for improving the quality of the generated alignments by incorporating user interaction. User involvement during the matching process has been identified as one of the challenges facing the ontology alignment community by (Shvaiko et al., 2013), and user interaction with a system is an integral part of it.
At the same time, with the tendency towards increasing ontology sizes, the alignment problem also grows. It is not feasible for a user to, for instance, validate all candidate mappings generated by a system, i.e., tool developers should aim at reducing unnecessary user interventions. All efforts required of the human have to be taken into account and have to be in an appropriate proportion to the result. Thus, besides the quality of the alignment, other measures such as the number of interactions are relevant for deciding which matching system is best suited for a certain matching task. All other OAEI tracks focus on fully automatic matching, and semi-automatic matching is not evaluated although such systems already exist (see, e.g., the overview in (Ivanova et al., 2015)). As long as the evaluation of such systems is not driven forward, it is hardly possible to systematically compare the quality of interactive matching approaches.
In this 10th edition of the Interactive track we use two OAEI datasets, namely Anatomy and Conference. The Anatomy dataset includes two ontologies (1 task), the Adult Mouse Anatomy (AMA) ontology and a part of the National Cancer Institute Thesaurus (NCI) describing the human anatomy. The Conference dataset covers 16 ontologies describing the domain of conference organization. We only use the test cases for which an alignment is publicly available (altogether 21 alignments/tasks).
The quality of the generated alignments in the Anatomy and Conference tracks has been increasing constantly, but in most cases only by a few percentage points. For example, in the Conference track in 2013, the best system according to F-measure was YAM++ with 70% (Cuenca Grau et al., 2013). Similarly, while the best result according to F-measure for the Anatomy track was achieved last year by AML with 94.6%, there has been very little improvement over the previous few campaigns (only a few percentage points). This shows that there is room for improvement, which could be filled by interactive means.
The interactive matching track was organized at OAEI 2022 for the tenth time. The goal of this evaluation is to simulate interactive matching (see (Paulheim et al., 2013), (Dragisic et al., 2016) and (Li et al., 2020)), where a human expert is involved to validate the mappings found by the matching system. In the evaluation, we look at how interacting with the user improves the matching results. The SEALS client was modified to allow interactive matchers to ask an Oracle: the interactive matcher can present a correspondence to the Oracle, which then tells the system whether the correspondence is right or wrong. Since 2016, we have extended this functionality so that a matcher can present several correspondences to the Oracle simultaneously. Since 2015, in addition to emulating the perfect user, we have considered domain experts with variable error rates, which reflects a more realistic scenario where a (simulated) user does not necessarily provide a correct answer. We experiment with three different error rates: 0.1, 0.2 and 0.3. The errors are randomly introduced into the reference alignment at the given rates.
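To make the simulation protocol concrete, the following is a minimal sketch of such an error-prone Oracle in Python. It is illustrative only and does not reflect the actual SEALS client implementation; the class name, the tuple representation of correspondences and the memoisation of flipped answers are our assumptions.

```python
import random

class SimulatedOracle:
    """Sketch of an error-prone oracle: it answers whether a candidate
    correspondence is in the reference alignment, but gives a wrong answer
    with a fixed probability. Decisions are memoised per correspondence so
    that repeated questions receive consistent answers."""

    def __init__(self, reference, error_rate=0.0, seed=42):
        self.reference = set(reference)          # e.g. {(uri1, uri2, "="), ...}
        self.error_rate = error_rate
        self.rng = random.Random(seed)
        self._flip = {}                          # correspondence -> bool

    def ask(self, correspondence):
        truth = correspondence in self.reference
        flip = self._flip.setdefault(correspondence,
                                     self.rng.random() < self.error_rate)
        return truth != flip                     # wrong answer if flipped

    def ask_many(self, correspondences):
        # Since 2016 a matcher may present several correspondences at once.
        return {c: self.ask(c) for c in correspondences}
```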
Each system was run ten times, and the final result of a system for each error rate is the average of these runs. This is the same configuration as in the non-interactive version of the Anatomy track, so the runtimes in the interactive version of this track are comparable. For the Conference dataset with the ra1 alignment, we consider the macro-average of precision and recall over the different ontology pairs, while the number of interactions is the total number of interactions over all tasks. Finally, the ten runs are averaged.
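As an illustration, the sketch below shows how the macro-average over ontology pairs and the average over runs could be computed, assuming the per-task precision and recall values are already available (the function names are ours, not part of the evaluation code).

```python
def macro_average(per_task_precision, per_task_recall):
    """Macro-average precision and recall over the ontology pairs of one run
    (as done for the Conference dataset), and derive the F-measure."""
    p = sum(per_task_precision) / len(per_task_precision)
    r = sum(per_task_recall) / len(per_task_recall)
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

def average_over_runs(run_results):
    """Average a measure over the ten runs of a system for one error rate."""
    return sum(run_results) / len(run_results)
```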
| Measure | Description | Meaning for the evaluation |
| --- | --- | --- |
| Precision, Recall, F-measure | Measure the tool's performance against the fixed reference alignment. | Show in absolute terms how the tool performed in the matching problem. |
| Precision Oracle, Recall Oracle, F-measure Oracle | Measure the tool's performance against the reference alignment as modified by the Oracle's errors. | Show how the tool is impacted by the errors. |
| Precision and Negative Precision | Measure the performance of the Oracle itself. | Show what type of errors the Oracle made, and thus explain the performance of the tool when faced with these errors. |
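The sketch below illustrates, under our reading of the table above, how these measures could be computed from sets of correspondences. The Oracle "Precision" and "Negative Precision" are interpreted here as the fraction of the Oracle's positive, respectively negative, answers that agree with the reference alignment; this interpretation and the function names are assumptions, not the official evaluation code.

```python
def precision_recall_f(alignment, reference):
    """Precision/recall/F-measure of a system alignment against a
    (possibly Oracle-modified) reference alignment, both given as sets."""
    alignment, reference = set(alignment), set(reference)
    tp = len(alignment & reference)
    p = tp / len(alignment) if alignment else 0.0
    r = tp / len(reference) if reference else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

def oracle_precisions(answers, reference):
    """'Precision' and 'Negative Precision' of the Oracle itself: the fraction
    of its positive (resp. negative) answers that agree with the reference.
    `answers` maps each asked correspondence to the Oracle's boolean answer."""
    pos = [c for c, a in answers.items() if a]
    neg = [c for c, a in answers.items() if not a]
    prec = sum(c in reference for c in pos) / len(pos) if pos else 1.0
    neg_prec = sum(c not in reference for c in neg) / len(neg) if neg else 1.0
    return prec, neg_prec
```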
The four tables below present the results for the Anatomy dataset with the four different error rates. The first two columns in each table present the run time (in seconds) and the size of the alignment. The next eight columns present the precision, recall, F-measure and recall+ obtained in the interactive and non-interactive tracks for the Anatomy dataset. The measure recall+ indicates the amount of detected non-trivial equivalence correspondences. To calculate it, the trivial correspondences (those with the same normalized label, including those in the oboInOwl namespace) are removed, as well as correspondences expressing relations other than equivalence. The meaning of the other columns is described at the beginning of the Results section. Fig. 1 shows the time intervals between the questions to the user/Oracle for the different systems and error rates over the ten runs (the runs are depicted in different colors).
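A minimal sketch of the recall+ computation is given below, assuming correspondences are (entity1, entity2, relation) tuples and that the set of trivial correspondences (same normalized label, including oboInOwl labels) has been computed separately; this is an illustration, not the official evaluation code.

```python
def recall_plus(alignment, reference, trivial):
    """Recall restricted to non-trivial equivalence correspondences: trivial
    correspondences and non-equivalence relations are removed from the
    reference before computing recall."""
    non_trivial_ref = {c for c in reference
                       if c not in trivial and c[2] == "="}
    found = {c for c in alignment if c in non_trivial_ref}
    return len(found) / len(non_trivial_ref) if non_trivial_ref else 0.0
```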
We first compare the performance of the two systems with an all-knowing Oracle (0.0 error rate), in terms of precision, recall, F-measure and recall+, to the results obtained in the non-interactive Anatomy track ("Precision", "Recall", "F-measure", "Recall+" vs. "Precision Non Inter", "Recall Non Inter", "F-measure Non Inter" and "Recall+ Non Inter"). The effect of introducing interactions with the Oracle/user is most pronounced for the ALIN system, and especially for its recall: ALIN has a 19.4 percentage point increase, whereas LogMap's recall decreases slightly. User interaction affects precision differently: it increases for both systems, and most markedly for LogMap, whose precision increases by around 7.3 percentage points. Consequently, ALIN's F-measure increases the most (from 0.835 to 0.952), while the F-measure for LogMap changes only slightly. It is worth noting that ALIN detects fewer non-trivial correspondences in the non-interactive track ("Recall+ Non Inter" = 0.438); introducing user interactions leads to detecting considerably more non-trivial correspondences ("Recall+" = 0.787). Thus it seems that ALIN partly relies on user interaction to generate non-trivial mappings. In terms of alignment size and the number of total requests, ALIN and LogMap show fairly similar results: ALIN generates 1413 correspondences with 579 requests (a ratio of 2.44), and LogMap generates 1299 correspondences with 388 requests (a ratio of 3.34).
With the introduction of an erroneous Oracle/user and moving towards higher error rates, the systems' performance starts to deteriorate slightly in comparison to the all-knowing Oracle. We can observe that the change in the error rate influences the systems differently in comparison to the non-interactive results. LogMap's performance with an all-knowing Oracle is better on all measures. Its F-measure in the 0.3 case drops below the non-interactive results, while its precision stays higher for all error rates. ALIN's recall is higher than its non-interactive results for all error rates; its F-measure stays higher than the non-interactive results for all error rates except 0.3, while its precision drops already in the 0.1 case. Overall, from error rate 0.0 to 0.3, the F-measure drops by about 12.5 percentage points for ALIN and by around 3.9 percentage points for LogMap. Comparing recall+ between the interactive and non-interactive tracks, ALIN has higher values in the interactive track for all error rates, which means it generated more non-trivial mappings. Let us now examine how precision and recall are affected when moving from error rate 0.0 to 0.3, in connection with the values in the last two columns, "Precision" and "Negative Precision". The "Precision" value drops more than the "Negative Precision" value for both ALIN and LogMap: ALIN has drops of 65.4 and 8.6 percentage points for "Precision" and "Negative Precision", respectively, while LogMap drops by 56.1 and 12 percentage points. Taking into account that the Oracle's "Precision" impacts the system's precision and its "Negative Precision" impacts the recall, we would expect precision to drop more than recall for ALIN and LogMap. This is indeed observed: ALIN's precision and recall drop by 19.4 and 5.5 percentage points, respectively, while LogMap's drop by around 5 and 2.9 percentage points. Another observation concerns the values in the "Precision Oracle" and "Recall Oracle" columns. ALIN's values are approximately constant, which means that the impact of the errors is linear. LogMap, as in the last year, has a noticeable drop in both "Precision Oracle" and "Recall Oracle", which indicates a supralinear impact of errors.
The sizes of the alignments produced by LogMap and ALIN both increase slightly with increasing error rates. The numbers of total requests and distinct mappings for ALIN decrease from 0.0 to 0.3, while these two numbers for LogMap stay the same across the different error rates. Another difference is observed in the ratio of correct to incorrect requests made to the Oracle ("True Positives" and "True Negatives"). In the case of an all-knowing Oracle, both systems make more incorrect than correct requests; the ratios of correct to incorrect requests are 0.21 for ALIN and 0.327 for LogMap. LogMap and ALIN keep almost the same ratios across the different error rates.
For an interactive system, the time intervals at which the user is involved in an interaction are important. Fig. 1 presents a comparison between the systems regarding the time periods at which the system presents a question to the user. Across the ten runs and the different error rates, LogMap's mean request intervals are around 0 to 1 ms, while the average of ALIN's mean intervals is 39 ms. While there are no significant deviations from this value for LogMap, several outliers are observed for ALIN. The run time does not change significantly between the different error rates for either LogMap or ALIN. LogMap generates alignments in less than a minute.
The take-away of the above analysis is that the two systems perform better with an all-knowing Oracle than in the non-interactive Anatomy track in terms of F-measure. For ALIN, all three measures in the interactive track are higher at the 0.0 error rate than in the non-interactive one. For LogMap, the recall in the 0.0 case is almost the same as its non-interactive recall and drops below it at the 0.1 error rate. The growth of the error rate impacts different measures in the different systems. The impact of the Oracle errors is linear for ALIN and supralinear for LogMap. The ratio of correct to incorrect questions to the Oracle differs between the two systems and further changes with the increasing error rates, which could be an indicator of different strategies for handling user errors.
LogMap has participated in the track since it was established in 2013; ALIN has participated since 2016. LogMap shows results similar to those of 2021. For ALIN, there is a slightly increasing trend in terms of recall, F-measure, recall+, the number of requests and the number of distinct mappings, but a decrease in precision with the introduction of an erroneous Oracle. ALIN has a strategy to standardize terms with grammars, which contributes to the increase in the number of distinct mappings. As in the last year, the impact of the Oracle's errors is linear for ALIN and supralinear for LogMap.
The four tables below present the results for the Conference dataset with the four different error rates. The columns are described at the beginning of the Results section. Fig. 2 shows the average request intervals per task (21 tasks in total per run) between the questions to the user/Oracle for the different systems and error rates over all tasks and the ten runs (the runs are depicted in different colors). The first number under a system's name is the average number of requests and the second number is the average of the mean request intervals over all tasks and runs.
When the systems are evaluated using an all-knowing Oracle, all participants present a higher F-measure than the one they obtained non-interactively. ALIN shows the greatest improvement, with an increase of around 22.3 percentage points in F-measure, and also the highest F-measure in absolute terms when questioning the Oracle. The substantial improvement of ALIN is mostly driven by gains in recall (around 28.8 percentage points), whereas LogMap shows more balanced improvements.
When an error rate is introduced, ALIN is the system that is more severely affected, losing around 11.7, 9.3 and 6.7 percentage points in F-measure with each increase in error rate. LogMap, on the other hand, is resilient, with losses between 0.2 and 2 percentage points. When comparing the systems' performance across the different error rates with their non-interactive results, we note that the precision drops earlier than the recall. ALIN's recall is higher than its non-interactive results even in the 0.3 case (its relatively low recall in the non-interactive track also contributes to this result), but its precision drops already for the smallest error rate. LogMap's precision drops in the 0.3 case, while its recall stays almost the same as in the non-interactive results. The larger drop in the "Precision" than in the "Negative Precision" values between the 0.0 and 0.3 error rates explains why, for both systems, the decrease in precision (around 44.3 and 8.7 percentage points for ALIN and LogMap, respectively) is larger than the decrease in recall (around 10.8 and 2.3 percentage points). The impact of Oracle errors ("Precision Oracle" and "Recall Oracle" columns) on the systems' behavior for the Conference dataset is similar to the impact of the errors on their behavior for the Anatomy dataset: it is linear for ALIN and supralinear for LogMap.
The number of total requests posed by each system differs. LogMap poses the lower number (82 in every test), whereas ALIN poses around 296 total requests on average. Analysing the values for the "True Positives" and "True Negatives", we observe that the systems mainly ask the Oracle about incorrect mappings (true negatives): in the 0.0 case the ratio is 0.11 for ALIN and 0.25 for LogMap. This ratio stays almost the same for both systems when the error rate increases. The time between requests is low for both systems, and the total run time is almost the same across the error rates.
The take-away of the above analysis is that both systems perform better on all measures with an all-knowing Oracle than in the non-interactive Conference track. At the 0.1 error rate, LogMap still performs better on all measures, while ALIN's precision drops below its non-interactive precision. ALIN's F-measure, however, stays higher than the non-interactive value for all error rates except 0.3, due to the improvement in recall.
For each system, the trends in precision, recall and F-measure between the interactive and non-interactive settings are similar, as is the influence of the different error rates on these measures. Both ALIN and LogMap show results similar to last year's.
While LogMap makes use of user interactions exclusively in the post-matching steps to filter its candidate mappings, ALIN can also add new candidate mappings to its initial set. LogMap requests feedback only on selected mapping candidates (based on their similarity patterns or their involvement in unsatisfiabilities) and presents three mappings to the user at a time. ALIN also employs this feature of analysing several mappings simultaneously and can present up to three mappings together to the user if a full entity name in a candidate mapping is the same as another entity name in another candidate mapping. The difference between ALIN and LogMap is that the latter asks three questions (">", "<" and "=") for every candidate mapping.
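To illustrate how such batched questions might be formed, the sketch below groups candidate mappings that share an entity label so that up to three of them can be presented in a single request, in the spirit of ALIN's strategy; the grouping heuristic, data representation and function names are our assumptions and do not reproduce ALIN's actual implementation.

```python
from collections import defaultdict

def group_candidates_by_label(candidates, label_of, max_group=3):
    """Group candidate mappings (entity1, entity2, relation) that share an
    entity label, so that up to max_group of them can be presented to the
    user/Oracle in one request. Purely illustrative."""
    by_label = defaultdict(list)
    for c in candidates:
        e1, e2, _ = c
        by_label[label_of(e1)].append(c)
        by_label[label_of(e2)].append(c)
    groups, seen = [], set()
    for label, cs in by_label.items():
        cs = [c for c in cs if c not in seen]
        for i in range(0, len(cs), max_group):
            chunk = cs[i:i + max_group]
            if chunk:
                groups.append(chunk)
                seen.update(chunk)
    return groups
```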
With the all-knowing Oracle, both systems perform better than their non-interactive results for both datasets. Although the systems' performance deteriorates when moving towards larger error rates, there are still benefits from the user interaction: some of the systems' measures stay above their non-interactive values even for the larger error rates. For the Anatomy dataset, different measures are affected in the different systems in comparison to their non-interactive results; for the Conference dataset, the precision drops below the non-interactive values faster than the recall. With increasing error rates, the drop in precision and recall for both systems is larger for the Conference dataset than for the Anatomy dataset.
The impact of the Oracle's errors is linear for ALIN and supralinear for LogMap for both datasets. One difference between the two datasets is the ratio of correct to incorrect requests to the Oracle. There is a clear trend for both systems in the Conference dataset, where the ratio stays around or under 0.5 for all error rates, while in the Anatomy dataset the ratio changes differently for each system.
Two models for system response times are frequently used in the literature (Dabrowski et al., 2011): Shneiderman and Seow take different approaches to categorizing response times. Shneiderman takes a task-centred view and sorts response times into four categories according to task complexity: typing and mouse movement (50-150 ms), simple frequent tasks (1 s), common tasks (2-4 s) and complex tasks (8-12 s). He suggests that the user is more tolerant of delays as the complexity of the task at hand grows. Unfortunately, no clear definition of task complexity is given. Seow's model looks at the problem from a user-centred perspective by considering the user's expectations towards the execution of a task: instantaneous (100-200 ms), immediate (0.5-1 s), continuous (2-5 s), captive (7-10 s). Ontology alignment is a cognitively demanding task and can fall into the third or fourth category in both models. In this regard, the response times (the request intervals, as we call them above) observed with both the Anatomy and Conference datasets fall well within the tolerable and acceptable response times, and even into the first categories, in both models. The request intervals for LogMap are less than 1 ms for both datasets; ALIN's request intervals are 40 ms on average for the Anatomy dataset and less than 1 ms for the Conference dataset. It could be the case, however, that the user cannot take advantage of very low system response times, because the task complexity may result in a higher user response time (i.e., the time the user needs to respond to the system after the system is ready).
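As a rough illustration, the following sketch classifies a system response time into Seow's user-centred categories using the ranges quoted above; the exact cut-offs between categories are our assumption, since the model gives ranges rather than hard limits.

```python
def seow_category(response_time_s):
    """Classify a system response time (in seconds) into Seow's user-centred
    categories; the boundaries used here are assumed from the quoted ranges."""
    if response_time_s <= 0.2:
        return "instantaneous"
    if response_time_s <= 1.0:
        return "immediate"
    if response_time_s <= 5.0:
        return "continuous"
    if response_time_s <= 10.0:
        return "captive"
    return "beyond captive"

# Example: approximate mean request intervals observed on the Anatomy dataset
for name, interval_ms in [("LogMap", 1), ("ALIN", 39)]:
    print(name, seow_category(interval_ms / 1000.0))
```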
Ernesto Jimenez-Ruiz, Bernardo Cuenca Grau, Yujiao Zhou, Ian Horrocks. "Large-scale Interactive Ontology Matching: Algorithms and Implementation". ECAI 2012.
Pavel Shvaiko, Jérôme Euzenat. "Ontology matching: state of the art and future challenges". IEEE Transactions on Knowledge and Data Engineering, 2013.
Valentina Ivanova, Patrick Lambrix, Johan Åberg. "Requirements for and evaluation of user support for large-scale ontology alignment". ESWC 2015.
Bernardo Cuenca Grau, Zlatan Dragisic, Kai Eckert, et al. "Results of the Ontology Alignment Evaluation Initiative 2013". OM 2013.
Heiko Paulheim, Sven Hertling, Dominique Ritze. "Towards Evaluating Interactive Ontology Matching Tools". ESWC 2013.
Zlatan Dragisic, Valentina Ivanova, Patrick Lambrix, Daniel Faria, Ernesto Jimenez-Ruiz, Catia Pesquita. "User validation in ontology alignment". ISWC 2016.
Huanyu Li, Zlatan Dragisic, Daniel Faria, Valentina Ivanova, Ernesto Jimenez-Ruiz, Patrick Lambrix, Catia Pesquita. "User validation in ontology alignment: functional assessment and impact". The Knowledge Engineering Review, 2020.
Jim Dabrowski, Ethan V. Munson. "40 years of searching for the best computer system response time". Interacting with Computers, 2011.
This track is currently organized by Ying Li, Mina Abd Nikooie Pour, Huanyu Li, Daniel Faria, Ernesto Jimenez Ruiz, Patrick Lambrix, and Catia Pesquita. If you have any problems working with the ontologies or any suggestions related to this track, feel free to write an email to ying [dot] li [at] liu [dot] se
We thank Zlatan Dragisic and Valentina Ivanova for their contributions in setting up this track.
We thank Dominique Ritze and Heiko Paulheim, the organisers of the 2013 and 2014 editions of this track, who were very helpful in the setting up of the 2015 edition.
The track is partially supported by the Optique project.