Bridging the Gap between Knowledge Graphs and LLMs for Multi-hop Question Answering
Published in Proceedings of the 34th ACM International Conference on Information and Knowledge Management (CIKM 2025), 2025
Abstract: To achieve multi-hop question answering over knowledge graphs (KGQA), many studies have explored converting retrieved subgraphs into textual form and feeding them into large language models (LLMs) to leverage their reasoning capabilities. However, due to the linear and discrete nature of text sequences, model performance may degrade when handling complex questions. To address this, we propose a novel structure-text knowledge synergistic method, BrikQA, which bridges the knowledge gap between knowledge graphs (KGs) and LLMs for multi-hop KGQA. LLMs and KGs complement each other: explicit topological patterns in the KG enhance knowledge understanding, while implicit knowledge mined by the LLM mitigates KG sparsity. Experimental results on various datasets demonstrate that BrikQA outperforms state-of-the-art baselines. Our source code is available at https://github.com/shijielaw/BrikQA.
Citation: Shijie Luo, Xinyuan Lu, Qinpei Zhao, and Weixiong Rao. 2025. Bridging the Gap between Knowledge Graphs and LLMs for Multi-hop Question Answering. In Proceedings of the 34th ACM International Conference on Information and Knowledge Management (CIKM ’25), November 10–14, 2025, Seoul, Republic of Korea. ACM, New York, NY, USA, 5 pages. https://doi.org/10.1145/3746252.3760973.
You can download this paper here: BrikQA.