KR 2023: Proceedings of the 20th International Conference on Principles of Knowledge Representation and Reasoning

Rhodes, Greece. September 2-8, 2023.

ISSN: 2334-1033
ISBN: 978-1-956792-02-7

Copyright © 2023 International Joint Conferences on Artificial Intelligence Organization

Explainable Representations for Relation Prediction in Knowledge Graphs

  1. Rita T. Sousa (LASIGE, Faculdade de Ciências da Universidade de Lisboa)
  2. Sara Silva (LASIGE, Faculdade de Ciências da Universidade de Lisboa)
  3. Catia Pesquita (LASIGE, Faculdade de Ciências da Universidade de Lisboa)

Keywords

  1. Explainable AI
  2. Applications of KR in bioinformatics
  3. Applications of KR in semantic web, knowledge graphs

Abstract

Knowledge graphs represent real-world entities and their relations in a semantically-rich structure supported by ontologies. Exploring this data with machine learning methods often relies on knowledge graph embeddings, which produce latent representations of entities that preserve structural and local graph neighbourhood properties, but sacrifice explainability. However, in tasks such as link or relation prediction, understanding which specific features better explain a relation is crucial to support complex or critical applications.

We propose SEEK, a novel approach for explainable representations to support relation prediction in knowledge graphs. It is based on identifying relevant shared semantic aspects (i.e., subgraphs) between entities and learning representations for each subgraph, producing a multi-faceted and explainable representation.
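To make the idea concrete, the following is a minimal sketch of how a multi-faceted, aspect-based representation could be assembled: one similarity feature per shared semantic aspect, fed to a classifier whose features map back to named aspects. All names (`annotations`, `embeddings`, `pair_features`) are hypothetical illustrations, not the authors' implementation.

```python
from sklearn.ensemble import RandomForestClassifier

def shared_semantic_aspects(annotations, e1, e2):
    """Ontology classes (aspect roots) annotated to both entities.
    `annotations` is assumed to map each entity to a set of class IRIs."""
    return sorted(annotations[e1] & annotations[e2])

def aspect_similarity(embeddings, e1, e2, aspect):
    """Cosine similarity of e1 and e2 restricted to the subgraph rooted at `aspect`.
    `embeddings[aspect]` is assumed to hold per-aspect entity vectors."""
    v1, v2 = embeddings[aspect][e1], embeddings[aspect][e2]
    num = sum(a * b for a, b in zip(v1, v2))
    den = (sum(a * a for a in v1) ** 0.5) * (sum(b * b for b in v2) ** 0.5)
    return num / den if den else 0.0

def pair_features(annotations, embeddings, aspects, e1, e2):
    """One similarity score per semantic aspect; zero when the aspect is not shared."""
    shared = set(shared_semantic_aspects(annotations, e1, e2))
    return [aspect_similarity(embeddings, e1, e2, a) if a in shared else 0.0
            for a in aspects]

def train_relation_classifier(features, labels):
    """Fit a classifier whose feature importances map back to named semantic aspects,
    so predictions can be read at the aspect level."""
    return RandomForestClassifier(n_estimators=100, random_state=0).fit(features, labels)
```

Because each feature corresponds to a named semantic aspect, both the learned importances and the per-pair feature values can be inspected directly, which is what gives the representation its explainability.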

We evaluate SEEK on two highly complex real-world relation prediction tasks: protein-protein interaction prediction and gene-disease association prediction.

Our extensive analysis on established benchmarks demonstrates that SEEK achieves comparable or even superior performance to standard representation learning methods while identifying both sufficient and necessary explanations based on shared semantic aspects.
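As an illustration of what sufficient and necessary explanations mean at the aspect level, the sketch below checks them by masking aspect features and re-querying the classifier from the previous snippet. It assumes binary labels and that masking an aspect amounts to zeroing its feature; this is an illustrative reading, not the authors' exact procedure.

```python
def keep_only(features, aspect_idx):
    """Zero every feature except the aspects in `aspect_idx`."""
    return [v if i in aspect_idx else 0.0 for i, v in enumerate(features)]

def is_sufficient(clf, features, aspect_idx):
    """Sufficient explanation: keeping only these aspects preserves a positive prediction."""
    return (clf.predict([features])[0] == 1
            and clf.predict([keep_only(features, aspect_idx)])[0] == 1)

def is_necessary(clf, features, aspect_idx):
    """Necessary explanation: removing (zeroing) these aspects flips the positive prediction."""
    kept = set(range(len(features))) - set(aspect_idx)
    return (clf.predict([features])[0] == 1
            and clf.predict([keep_only(features, kept)])[0] != 1)
```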