Fostering User Involvement in Ontology Alignment and Alignment Evaluation

Author: Valentina Ivanova
Publisher: Linköping University Electronic Press
ISBN: 9176854035
Languages: en
Pages: 73

Book Description
The abundance of data at our disposal empowers data-driven applications and decision making. The knowledge captured in the data, however, has not been utilized to its full potential, as it is accessible only through human interpretation and the data are distributed across heterogeneous repositories. Ontologies are a key technology for unlocking the knowledge in the data by providing the means to model the world around us and to infer knowledge implicitly captured in the data. As data are hosted by independent organizations, we often need to use several ontologies and discover the relationships between them in order to support data and knowledge transfer. Broadly speaking, while ontologies provide formal representations and thus the basis, ontology alignment supplies integration techniques and thus the means to turn the data kept in distributed, heterogeneous repositories into valuable knowledge. While many automatic approaches for creating alignments have already been developed, user input is still required for obtaining the highest-quality alignments. This thesis focuses on supporting users during the cognitively intensive alignment process and makes several contributions. We have identified front- and back-end system features that foster user involvement during the alignment process and have investigated their support in existing systems through user interface evaluations and literature studies. We have further narrowed down our investigation to features connected to the arguably most cognitively demanding task from the users’ perspective—manual validation—and have also considered the level of user expertise by assessing the impact of user errors on alignment quality. As developing and aligning ontologies is an error-prone task, we have focused on the benefits of integrating ontology alignment and debugging. We have enabled interactive comparative exploration and evaluation of multiple alignments at different levels of detail by developing a dedicated visual environment—Alignment Cubes—which allows alignments to be evaluated even in the absence of reference alignments. Inspired by the latest technological advances, we have investigated and identified three promising directions for applying large, high-resolution displays in the field: improving navigation in the ontologies and their alignments, supporting reasoning, and supporting collaboration between users.
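
To make the alignment and evaluation tasks described above concrete, the following minimal Python sketch (illustrative only, not the thesis's own algorithms or tools) generates candidate mappings between two toy ontologies by label similarity and scores an alignment against a reference alignment. The ontology labels, similarity threshold, and helper names are hypothetical.

    # Minimal sketch: candidate mappings by label similarity, plus evaluation
    # against a reference alignment when one is available.
    from difflib import SequenceMatcher

    def similarity(a: str, b: str) -> float:
        """String similarity in [0, 1] between two concept labels."""
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    def suggest_mappings(onto1, onto2, threshold=0.8):
        """Return candidate equivalence mappings for manual validation."""
        return {(c1, c2)
                for c1 in onto1 for c2 in onto2
                if similarity(c1, c2) >= threshold}

    def evaluate(alignment, reference):
        """Precision and recall of an alignment w.r.t. a reference alignment."""
        tp = len(alignment & reference)
        precision = tp / len(alignment) if alignment else 0.0
        recall = tp / len(reference) if reference else 0.0
        return precision, recall

    # Hypothetical toy ontologies (anatomy-style labels).
    mouse = {"heart", "cardiac valve", "brain"}
    human = {"Heart", "Heart Valve", "Brain", "Liver"}
    candidates = suggest_mappings(mouse, human)   # shown to a user to accept or reject
    reference = {("heart", "Heart"), ("brain", "Brain"), ("cardiac valve", "Heart Valve")}
    print(evaluate(candidates, reference))

In a real system the candidate mappings would be presented to the user for validation; the evaluation step illustrates why reference alignments matter and why evaluation without them, as addressed by Alignment Cubes, is harder.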

The Semantic Web – ISWC 2016

Author: Paul Groth
Publisher: Springer
ISBN: 3319465236
Category: Computers
Languages: en
Pages: 672

Book Description
The two-volume set LNCS 9981 and 9982 constitutes the refereed proceedings of the 15th International Semantic Web Conference, ISWC 2016, which was held in Kobe, Japan, in October 2016. The 75 full papers presented in these proceedings were carefully reviewed and selected from 326 submissions. The International Semantic Web Conference is the premier forum for Semantic Web research, where cutting-edge scientific results and technological innovations are presented, where problems and solutions are discussed, and where the future of this vision is being developed. It brings together specialists in fields such as artificial intelligence, databases, social networks, distributed computing, Web engineering, information systems, human-computer interaction, natural language processing, and the social sciences. The Research Track solicited novel and significant research contributions addressing theoretical, analytical, empirical, and practical aspects of the Semantic Web. The Applications Track solicited submissions exploring the benefits and challenges of applying semantic technologies in concrete, practical applications, in contexts ranging from industry to government and science. The newly introduced Resources Track sought submissions providing a concise and clear description of a resource and its (expected) usage. Traditional resources include ontologies, vocabularies, datasets, benchmarks and replication studies, services and software. Besides more established types of resources, the track solicited submissions of new types of resources such as ontology design patterns, crowdsourcing task designs, workflows, methodologies, and protocols and measures.

The Semantic Web. Latest Advances and New Domains

Author: Fabien Gandon
Publisher: Springer
ISBN: 3319188186
Category: Computers
Languages: en
Pages: 830

Book Description
This book constitutes the refereed proceedings of the 12th Extended Semantic Web Conference, ESWC 2015, held in Portorož, Slovenia, in May/June 2015. The 43 revised full papers presented together with three invited talks were carefully reviewed and selected from 164 submissions. The program was complemented by a demonstration and poster session, in which researchers had the chance to present their latest results and advances in the form of live demos. In addition, the PhD Symposium program included 12 contributions, selected out of 16 submissions. The core tracks of the research conference were complemented with new tracks focusing on linking machine and human computation at web scale (Cognition and Semantic Web, Human Computation and Crowdsourcing), besides the following subjects: Vocabularies, Schemas, Ontologies, Reasoning, Linked Data, Semantic Web and Web Science, Semantic Data Management, Big Data, Scalability, Natural Language Processing and Information Retrieval, Machine Learning, Mobile Web, Internet of Things and Semantic Streams, Services, Web APIs and the Web of Things, as well as an In-Use and Industrial Track.

Studying Simulations with Distributed Cognition

Author: Jonas Rybing
Publisher: Linköping University Electronic Press
ISBN: 9176853489
Languages: en
Pages: 94

Book Description
Simulations are frequently used for training, performance assessment, and prediction of future outcomes. In this thesis, the term “human-centered simulation” is used to refer to any simulation in which humans and human cognition are integral to the simulation’s function and purpose (e.g., simulation-based training). A general problem for human-centered simulations is to capture the cognitive processes and activities of the target situation (i.e., the real world task) and recreate them accurately in the simulation. The prevalent view within the simulation research community is that cognition consists of internal, decontextualized computational processes of individuals. However, contemporary theories of cognition emphasize the importance of the external environment, use of tools, as well as social and cultural factors in cognitive practice. Consequently, there is a need for research on how such contemporary perspectives can be used to describe human-centered simulations, re-interpret theoretical constructs of such simulations, and direct how simulations should be modeled, designed, and evaluated. This thesis adopts distributed cognition as a framework for studying human-centered simulations. Training and assessment of emergency medical management in a Swedish context using the Emergo Train System (ETS) simulator was adopted as a case study. ETS simulations were studied and analyzed using the distributed cognition for teamwork (DiCoT) methodology with the goal of understanding, evaluating, and testing the validity of the ETS simulator. Moreover, to explore distributed cognition as a basis for simulator design, a digital re-design of ETS (DIGEMERGO) was developed based on the DiCoT analysis. The aim of the DIGEMERGO system was to retain core distributed cognitive features of ETS, to increase validity and outcome reliability, and to provide a digital platform for emergency medical studies. DIGEMERGO was evaluated in three separate studies: first, a usefulness, usability, and face-validation study that involved subject-matter experts; second, a comparative validation study using an expert-novice group comparison; and finally, a transfer of training study based on self-efficacy and management performance. Overall, the results showed that DIGEMERGO was perceived as a useful, immersive, and promising simulator – with mixed evidence for validity – that demonstrated increased general self-efficacy and management performance following simulation exercises. This thesis demonstrates that distributed cognition, using DiCoT, is a useful framework for understanding, designing and evaluating simulated environments. In addition, the thesis conceptualizes and re-interprets central constructs of human-centered simulation in terms of distributed cognition. In doing so, the thesis shows how distributed cognitive processes relate to validity, fidelity, functionality, and usefulness of human-centered simulations. This thesis thus provides a new understanding of human-centered simulations that is grounded in distributed cognition theory.

System-Level Design of GPU-Based Embedded Systems

Author: Arian Maghazeh
Publisher: Linköping University Electronic Press
ISBN: 9176851753
Languages: en
Pages: 62

Book Description
Modern embedded systems deploy several hardware accelerators, in a heterogeneous manner, to deliver high-performance computing. Among such devices, graphics processing units (GPUs) have earned a prominent position by virtue of their immense computing power. However, a system design that relies on the sheer throughput of GPUs is often incapable of satisfying the strict power- and time-related constraints faced by embedded systems. This thesis presents several system-level software techniques to optimize the design of GPU-based embedded systems under various graphics and non-graphics applications. Compared to conventional application-level optimizations, the system-wide view of our proposed techniques brings several advantages: First, it allows for fully incorporating the limitations and requirements of the various system parts in the design process. Second, it can unveil optimization opportunities by exposing the information flow between the processing components. Third, the techniques are generally applicable to a wide range of applications with similar characteristics. In addition, multiple system-level techniques can be combined with one another or with application-level techniques to further improve performance. We begin by studying some of the unique attributes of GPU-based embedded systems and discussing several factors that distinguish the design of these systems from that of conventional high-end GPU-based systems. We then proceed to develop two techniques that address, from different perspectives, an important challenge in the design of GPU-based embedded systems. The challenge arises from the fact that GPUs require a large amount of workload to be present at runtime in order to deliver high throughput. However, for some embedded applications, collecting large batches of input data requires an unacceptable waiting time, prompting a trade-off between throughput and latency. We also develop an optimization technique for GPU-based applications that addresses the memory bottleneck by utilizing the GPU L2 cache to shorten data access time. Moreover, in the area of graphics applications, and in particular with a focus on mobile games, we propose a power management scheme that reduces GPU power consumption by dynamically adjusting the display resolution while considering the user's visual perception at various resolutions. We also discuss the collective impact of the proposed techniques in tackling the design challenges of emerging complex systems. The proposed techniques are assessed through real-life experiments on GPU-based hardware platforms, which demonstrate the superior performance of our approaches compared to state-of-the-art techniques.
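
The throughput/latency trade-off described above can be illustrated with a small, schematic Python sketch: inputs destined for the GPU are buffered, and a batch is dispatched either when it is full or when the oldest item has waited past a latency budget. This is an illustration only, not the thesis's actual techniques; the class, parameter names, and toy dispatch function are hypothetical.

    # Schematic sketch of batching under a latency budget.
    import time

    class Batcher:
        def __init__(self, dispatch, max_batch=64, max_wait_s=0.005):
            self.dispatch = dispatch          # e.g. a wrapper around a GPU kernel launch
            self.max_batch = max_batch        # larger batches -> higher throughput
            self.max_wait_s = max_wait_s      # latency budget for the oldest queued item
            self.items, self.oldest = [], None

        def add(self, item):
            if not self.items:
                self.oldest = time.monotonic()
            self.items.append(item)
            self._maybe_dispatch()

        def flush(self):
            if self.items:
                self.dispatch(self.items)     # one large launch amortizes per-launch overhead
                self.items, self.oldest = [], None

        def _maybe_dispatch(self):
            full = len(self.items) >= self.max_batch
            waited = time.monotonic() - self.oldest if self.items else 0.0
            if full or waited >= self.max_wait_s:
                self.flush()

    # Hypothetical usage: the "kernel" here just reports the batch size.
    b = Batcher(dispatch=lambda batch: print(f"launch kernel on {len(batch)} items"),
                max_batch=4, max_wait_s=0.001)
    for i in range(10):
        b.add(i)
    b.flush()                                 # dispatch whatever is left

Increasing max_batch favors throughput, while decreasing max_wait_s bounds the waiting time of individual inputs, which is the tension the thesis addresses.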

Knowledge Engineering and Knowledge Management

Author: Catherine Faron Zucker
Publisher: Springer
ISBN: 3030036677
Category: Computers
Languages: en
Pages: 586

Book Description
This book constitutes the refereed proceedings of the 21st International Conference on Knowledge Engineering and Knowledge Management, EKAW 2018, held in Nancy, France, in November 2018. The 36 full papers presented were carefully reviewed and selected from 104 submissions. The papers cover all aspects of eliciting, acquiring, modeling, and managing knowledge, the construction of knowledge-intensive systems and services for the Semantic Web, knowledge management, e-business, natural language processing, intelligent information integration, personal digital assistance systems, and a variety of other related topics. A special focus was on "Knowledge and AI", i.e., papers describing algorithms, tools, methodologies, and applications that exploit the interplay between knowledge and Artificial Intelligence techniques, with a special emphasis on knowledge discovery.

Completion of Ontologies and Ontology Networks

Author: Zlatan Dragisic
Publisher: Linköping University Electronic Press
ISBN: 9176855228
Category: Computers
Languages: en
Pages: 65

Book Description
The World Wide Web contains large amounts of data, and in most cases this data has no explicit structure. The lack of structure makes it difficult for automated agents to understand and use such data. A step towards a more structured World Wide Web is the Semantic Web, which aims at introducing semantics to data on the World Wide Web. One of the key technologies in this endeavour is ontologies, which provide a means for modeling a domain of interest and are used for search and integration of data. In recent years many ontologies have been developed. To be able to use multiple ontologies it is necessary to align them, i.e., find inter-ontology relationships. However, developing and aligning ontologies is not an easy task, and it is often the case that ontologies and their alignments are incorrect and incomplete. This can be a problem for semantically-enabled applications. Incorrect and incomplete ontologies and alignments directly influence the quality of the results of such applications, as wrong results can be returned and correct results can be missed. This thesis focuses on the problem of completing ontologies and ontology networks. The contributions of the thesis are threefold. First, we address the issue of completing the is-a structure and alignment in ontologies and ontology networks. We have formalized the problem of completing the is-a structure in ontologies as an abductive reasoning problem and developed algorithms as well as systems for dealing with the problem. With respect to the completion of alignments, we have studied system performance in the Ontology Alignment Evaluation Initiative, a yearly evaluation campaign for ontology alignment systems. We have also addressed the scalability of ontology matching, which is one of the current challenges, by developing an approach for reducing the search space when generating the alignment. Second, high-quality completion requires user involvement. As users' time and effort are a limited resource, we address the issue of limiting and facilitating user interaction in the completion process. We have conducted a broad study of state-of-the-art ontology alignment systems and identified different issues related to the process. We have also conducted experiments to assess the impact of user errors in the completion process. While the completion of ontologies and ontology networks can be done at any point in the life-cycle of ontologies and ontology networks, some of the issues can be addressed already in the development phase. The third contribution of the thesis addresses this by introducing ontology completion and ontology alignment into an existing ontology development methodology.
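
As a rough illustration of the completion idea (not the thesis's abduction-based algorithms), the Python sketch below uses an alignment between two ontologies to surface is-a relations that hold in one ontology but cannot be derived in the other; such relations would be candidates for completion that a user could then validate. The toy concepts, alignment, and function names are hypothetical.

    # Simplistic sketch: if A1 is-a B1 holds in ontology 1 and A1, B1 map to
    # A2, B2 in ontology 2, then A2 is-a B2 is expected there as well; if it
    # cannot be derived, it is a candidate missing is-a relation.

    def transitive_closure(is_a):
        """All (sub, super) pairs derivable from direct is-a edges."""
        closure = set(is_a)
        changed = True
        while changed:
            changed = False
            for (a, b) in list(closure):
                for (c, d) in list(closure):
                    if b == c and (a, d) not in closure:
                        closure.add((a, d))
                        changed = True
        return closure

    def missing_is_a(is_a_1, is_a_2, alignment):
        """Candidate is-a relations missing from ontology 2, suggested by ontology 1."""
        mapped = dict(alignment)              # concept in ontology 1 -> equivalent in ontology 2
        derivable = transitive_closure(is_a_2)
        candidates = set()
        for (a1, b1) in transitive_closure(is_a_1):
            if a1 in mapped and b1 in mapped:
                a2, b2 = mapped[a1], mapped[b1]
                if (a2, b2) not in derivable:
                    candidates.add((a2, b2))
        return candidates

    # Hypothetical toy example.
    is_a_1 = {("granuloma", "lesion"), ("lesion", "finding")}
    is_a_2 = {("Granuloma", "Finding")}       # the intermediate is-a relations are missing
    alignment = {("granuloma", "Granuloma"), ("lesion", "Lesion"), ("finding", "Finding")}
    print(missing_is_a(is_a_1, is_a_2, alignment))
    # {("Granuloma", "Lesion"), ("Lesion", "Finding")}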

Machine Learning-Based Bug Handling in Large-Scale Software Development

Author: Leif Jonsson
Publisher: Linköping University Electronic Press
ISBN: 9176853063
Languages: en
Pages: 120

Book Description
This thesis investigates the possibilities of automating parts of the bug handling process in large-scale software development organizations. The bug handling process is a large part of the mostly manual, and very costly, maintenance of software systems. Automating parts of this time-consuming and very laborious process could save large amounts of time and effort wasted on dealing with bug reports. In this thesis we focus on two aspects of the bug handling process: bug assignment and fault localization. Bug assignment is the process of assigning a newly registered bug report to a design team or developer. Fault localization is the process of finding where in a software architecture the fault causing the bug report should be solved. The main reason these tasks are not automated is that they are considered hard to automate, requiring human expertise and creativity. This thesis examines the possibility of using machine learning techniques for automating at least parts of these processes. We call these automated techniques Automated Bug Assignment (ABA) and Automatic Fault Localization (AFL), respectively. We treat both of these problems as classification problems. In ABA, the classes are the design teams in the development organization. In AFL, the classes consist of the software components in the software architecture. We focus on a high-level fault localization that is suitable for integration into the initial support flow of large software development organizations. The thesis consists of six papers that investigate different aspects of the AFL and ABA problems. The first two papers are empirical and exploratory in nature, examining the ABA problem using existing machine learning techniques but introducing ensembles into the ABA context. In the first paper we show that, as in many other contexts, ensembles such as the stacked generalizer (or stacking) improve classification accuracy compared to individual classifiers when evaluated using cross-validation. The second paper thoroughly explores many aspects of the ABA problem in the context of stacking, such as training-set size, the age of bug reports, and different types of evaluation. The second paper also expands upon the first in the number of industry bug reports used, roughly 50,000, drawn from two large-scale industrial software development contexts. As far as we are aware, it is still the largest study on real industry data on this topic to date. The third and sixth papers are theoretical, improving inference in a now classic machine learning technique for topic modeling called Latent Dirichlet Allocation (LDA). We show that, unlike the currently dominating approximate approaches, we can do parallel inference in the LDA model with a mathematically correct algorithm, without sacrificing efficiency or speed. The approaches are evaluated on standard research datasets, measuring various aspects such as sampling efficiency and execution time. Paper four, also theoretical, builds upon the LDA model and introduces a novel supervised Bayesian classification model that we call DOLDA. The DOLDA model deals with textual content as well as structured numeric and nominal inputs in the same model. The approach is evaluated on a new dataset extracted from IMDb, which contains both nominal and textual data. The model is evaluated using two approaches: first, by accuracy, using cross-validation; second, by comparing the simplicity of the final model with that of other approaches.
In paper five we empirically study the performance, in terms of prediction accuracy, of the DOLDA model applied to the AFL problem. The DOLDA model was designed with the AFL problem in mind, since that problem has exactly the structure of a mix of nominal and numeric inputs in combination with unstructured text. We show that our DOLDA model exhibits many desirable properties, among them interpretability, which the research community has identified as missing in current models for AFL.
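
As a hedged illustration of treating bug assignment as text classification with a stacked ensemble, the sketch below uses scikit-learn on a few invented bug reports. It is not the papers' actual models, feature sets, or industrial data; the report texts and team labels are hypothetical.

    # Minimal sketch: bug assignment as text classification with stacking.
    from sklearn.pipeline import make_pipeline
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.ensemble import StackingClassifier

    # Hypothetical bug-report texts and the design team that resolved each one.
    reports = [
        "crash in radio driver after handover",
        "UI freezes when opening settings menu",
        "memory leak in baseband scheduler",
        "button labels truncated on small screens",
    ]
    teams = ["baseband", "ui", "baseband", "ui"]

    base_learners = [
        ("nb", make_pipeline(TfidfVectorizer(), MultinomialNB())),
        ("lr", make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))),
    ]
    # The stacked generalizer combines the base learners' predictions
    # with a meta-classifier (here logistic regression).
    aba_model = StackingClassifier(estimators=base_learners,
                                   final_estimator=LogisticRegression(),
                                   cv=2)
    aba_model.fit(reports, teams)
    print(aba_model.predict(["screen rendering glitch in settings"]))

The stacking step mirrors the idea from the first two papers: base classifiers are trained with internal cross-validation and a meta-classifier learns how to combine their outputs.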

Distributed Moving Base Driving Simulators

Author: Anders Andersson
Publisher: Linköping University Electronic Press
ISBN: 9176850900
Languages: en
Pages: 42

Book Description
Development of new functionality and smart systems for different types of vehicles is accelerating with the advent of new emerging technologies such as connected and autonomous vehicles. To ensure that these new systems and functions work as intended, flexible and credible evaluation tools are necessary. One example of this type of tool is a driving simulator, which can be used for testing new and existing vehicle concepts and driver support systems. When a driver in a driving simulator operates it in the same way as they would in actual traffic, the simulator provides a realistic evaluation of whatever is under investigation. Two advantages of a driving simulator are (1) that the same situation can be repeated several times over a short period of time, and (2) that driver reactions can be studied during dangerous situations that could result in serious injuries if they occurred in the real world. An important component of a driving simulator is the vehicle model, i.e., the model that describes how the vehicle reacts to its surroundings and driver inputs. To increase the simulator realism or the computational performance, it is possible to divide the vehicle model into subsystems that run on different computers connected in a network. A subsystem can also be replaced with hardware using so-called hardware-in-the-loop simulation, and can then be connected to the rest of the vehicle model through a specified interface. The technique of dividing a model into smaller subsystems running on separate nodes that communicate through a network is called distributed simulation. This thesis investigates if and how a distributed simulator design might facilitate the maintenance and new development required for a driving simulator to be able to keep up with the increasing pace of vehicle development. For this purpose, three different distributed simulator solutions have been designed, built, and analyzed with the aim of constructing distributed simulators, including external hardware, where the simulation achieves the same degree of realism as with a traditional driving simulator. One of these simulator solutions has been used to create a parameterized powertrain model that can be configured to represent any of a number of different vehicles. Furthermore, the driver's driving task is combined with the powertrain model to monitor deviations. After the powertrain model was created, subsystems from a simulator solution and the powertrain model were transferred to a Modelica environment. The goal is to create a framework for requirement testing that guarantees sufficient realism, also for a distributed driving simulation. The results show that the distributed simulators we have developed work well overall, with satisfactory performance. It is important to manage the vehicle model and how it is connected to a distributed system. In the distributed driveline simulator setup, the network delays were so small that they could be ignored, i.e., they did not affect the driving experience. However, if the delays are gradually increased, a driver in the distributed simulator will change their behavior. The impact of communication latency on a distributed simulator also depends on the simulator application, where different usages of the simulator, i.e., different simulator studies, will have different demands. We believe that many simulator studies could be performed using a distributed setup. One issue is how modifications to the system affect the vehicle model and the desired behavior.
This leads to the need for a methodology for managing model requirements. In order to detect model deviations in the simulator environment, a monitoring aid has been implemented to notify test managers when a model behaves strangely or is driven outside of its validated region. Since the availability of distributed laboratory equipment can be limited, the possibility of using Modelica (an equation-based, object-oriented programming language) for simulating subsystems is also examined. The Modelica implementation has also been extended with requirements management, and a framework is proposed for automatically evaluating the model in a tool.
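
The monitoring aid mentioned above can be illustrated with a minimal Python sketch (not the thesis's implementation): each monitored signal is given a validated range, and samples falling outside it are flagged so a test manager could be notified. The signal names and ranges below are hypothetical.

    # Minimal sketch of a model-deviation monitor with validated ranges.
    from dataclasses import dataclass

    @dataclass
    class ValidatedRange:
        low: float
        high: float

        def contains(self, value: float) -> bool:
            return self.low <= value <= self.high

    class DeviationMonitor:
        def __init__(self, ranges):
            self.ranges = ranges              # signal name -> ValidatedRange
            self.deviations = []

        def sample(self, t: float, signals: dict):
            """Check one time step of simulator signals against validated ranges."""
            for name, value in signals.items():
                rng = self.ranges.get(name)
                if rng is not None and not rng.contains(value):
                    self.deviations.append((t, name, value))
                    print(f"[t={t:.2f}s] {name}={value} outside validated "
                          f"range [{rng.low}, {rng.high}]")

    # Hypothetical validated regions for a powertrain model.
    monitor = DeviationMonitor({
        "engine_speed_rpm": ValidatedRange(800, 6500),
        "vehicle_speed_kph": ValidatedRange(0, 180),
    })
    monitor.sample(0.1, {"engine_speed_rpm": 2100, "vehicle_speed_kph": 35})
    monitor.sample(0.2, {"engine_speed_rpm": 7200, "vehicle_speed_kph": 35})   # flagged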