\section{Related Work}

\paragraph{Knowledge Base Question Answering.}
Existing Knowledge Base Question Answering (KBQA) methods can be broadly categorized into Information Retrieval-based (IR-based) and Semantic Parsing-based (SP-based) methods.

\paragraph{IR-based Methods.}
IR-based methods retrieve a question-relevant subgraph from the KB and then reason over it to rank candidate entities as answers. Representative works include:
\begin{itemize}
    \item Bridging the KB-Text Gap: Leveraging Structured Knowledge-aware Pre-training for KBQA
    \item Subgraph Retrieval Enhanced Model for Multi-hop Knowledge Base Question Answering
    \item Large-scale Relation Learning for Question Answering over Knowledge Bases with Pre-trained Language Models
    \item TransferNet: An Effective and Transparent Framework for Multi-hop Question Answering over Relation Graph
    \item Improving Multi-hop Knowledge Base Question Answering by Learning Intermediate Supervision Signals
\end{itemize}

\paragraph{LLM-augmented IR Methods.}
More recent work couples LLMs with iterative retrieval over structured data:
\begin{itemize}
    \item StructGPT: A General Framework for Large Language Model to Reason over Structured Data
    \item Think-on-Graph: Deep and Responsible Reasoning of Large Language Model on Knowledge Graph
\end{itemize}

\paragraph{SP-based Methods.}
SP-based methods parse a natural-language question into an executable logical form (e.g., a query graph or SPARQL query) that is then executed against the KB. Representative works include:
\begin{itemize}
    \item Outlining and Filling: Hierarchical Query Graph Generation for Answering Complex Questions over Knowledge Graphs
    \item BeamQA: Multi-hop Knowledge Graph Question Answering with Sequence-to-Sequence Prediction and Beam Search
    \item UniKGQA: Unified Retrieval and Reasoning for Solving Multi-hop Question Answering over Knowledge Graph
\end{itemize}

\paragraph{Generation-based SP Methods.}
A prominent line of SP work casts logical-form construction as sequence generation:
\begin{itemize}
    \item ReTraCk: A Flexible and Efficient Framework for Knowledge Base Question Answering
    \item Case-based Reasoning for Natural Language Queries over Knowledge Bases
    \item RnG-KBQA: Generation Augmented Iterative Ranking for Knowledge Base Question Answering
    \item Program Transfer for Answering Complex Questions over Knowledge Bases
    \item TIARA: Multi-grained Retrieval for Robust Question Answering over Large Knowledge Base
    \item ArcaneQA: Dynamic Program Induction and Contextualized Encoding for Knowledge Base Question Answering
    \item Logical Form Generation via Multi-task Learning for Complex Question Answering over Knowledge Bases
    \item Uni-Parser: Unified Semantic Parser for Question Answering on Knowledge Base and Database
    \item UnifiedSKG: Unifying and Multi-tasking Structured Knowledge Grounding with Text-to-Text Language Models
    \item DecAF: Joint Decoding of Answers and Logical Forms for Question Answering over Knowledge Bases
    \item FC-KBQA: A Fine-to-Coarse Composition Framework for Knowledge Base Question Answering
\end{itemize}

\paragraph{LLM-augmented SP Methods.}
\begin{itemize}
    \item Don't Generate, Discriminate: A Proposal for Grounding Language Models to Real-World Environments
    \item ChatKBQA: A Generate-then-Retrieve Framework for Knowledge Base Question Answering with Fine-tuned Large Language Models
\end{itemize}



% TODO: discuss LLMs and tool learning.





FC-KBQA (Zhang et al., 2023) introduces a Fine-to-Coarse composition framework for question answering over knowledge bases, utilizing fine-grained component detection, middle-grained component constraints, and coarse-grained component composition.

ToG (Sun et al., 2023) integrates LLMs with KGs for deep and responsible reasoning: the LLM performs beam search over the KG, dynamically exploring multiple reasoning paths and deciding which to extend at each hop, which strengthens its deep reasoning on knowledge-intensive tasks.
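The LLM-guided exploration in ToG can be pictured as beam search over KG paths where the LLM acts as the scoring function. Below is a minimal, illustrative sketch; the toy graph, the keyword-overlap scorer (standing in for LLM relevance judgments), and all names are our own assumptions, not the authors' implementation.

```python
# Toy sketch of ToG-style beam search over a KG. In the actual method, an
# LLM scores and prunes candidate relations/entities at each hop; here a
# crude keyword-overlap heuristic plays that role.
from typing import Dict, List, Tuple

# Toy KG: head entity -> outgoing (relation, tail entity) edges.
KG: Dict[str, List[Tuple[str, str]]] = {
    "Alan Turing": [("field", "Computer Science"), ("born_in", "London")],
    "London": [("capital_of", "United Kingdom")],
    "Computer Science": [("subfield", "Artificial Intelligence")],
}

def score_path(question: str, path: List[Tuple[str, str, str]]) -> float:
    """Stand-in for the LLM's relevance judgment: keyword overlap."""
    text = " ".join(f"{h} {r} {t}" for h, r, t in path).lower()
    return sum(text.count(tok) for tok in question.lower().split())

def beam_search(question: str, topic_entity: str, width: int = 2, depth: int = 2):
    """Extend reasoning paths hop by hop, keeping the `width` best per hop."""
    beams = [([], topic_entity)]  # each beam: (path so far, current entity)
    for _ in range(depth):
        candidates = [
            (path + [(ent, rel, tail)], tail)
            for path, ent in beams
            for rel, tail in KG.get(ent, [])
        ]
        if not candidates:
            break
        # LLM-style pruning: rank all expanded paths, keep the top `width`.
        candidates.sort(key=lambda c: score_path(question, c[0]), reverse=True)
        beams = candidates[:width]
    return beams
```

The key design point the sketch surfaces is that pruning happens per hop, so the number of LLM scoring calls grows with beam width and depth rather than with the full size of the KG neighborhood.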


StructGPT (Jiang et al., 2023a) enhances LLMs' reasoning over structured data through an Iterative Reading-then-Reasoning (IRR) approach: specialized interfaces provide efficient data access, and an invoking-linearization-generation procedure is applied iteratively, letting the LLM read structured evidence in rounds until it can answer a complex question.
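The invoking-linearization-generation loop described above can be sketched as follows. This is only an illustrative skeleton under our own assumptions: the toy interface, the triple linearization, and the stub LLM are hypothetical stand-ins, not StructGPT's actual interfaces or prompts.

```python
# Skeleton of an iterative reading-then-reasoning (IRR) loop in the spirit
# of StructGPT. All components below are toy stubs for illustration.
NEED_MORE = "NEED_MORE"  # sentinel: the LLM requests another reading round

def linearize(triples):
    """Linearization: serialize structured records (KG triples) to text."""
    return " ".join(f"({h}, {r}, {t})" for h, r, t in triples)

def toy_interface(question, seen):
    """Stand-in for a specialized data-access interface: return the next
    batch of question-relevant triples not yet read."""
    data = [("Paris", "capital_of", "France"), ("France", "currency", "Euro")]
    return [t for t in data if t not in seen][:1]

def toy_llm(question, evidence):
    """Stand-in for the LLM: answer once the evidence suffices,
    otherwise ask for more reading."""
    if "currency" in question and "currency" in evidence:
        return "Euro"
    return NEED_MORE

def irr_answer(question, interface, llm, max_steps=5):
    seen, evidence = [], ""
    for _ in range(max_steps):
        batch = interface(question, seen)    # invoking: fetch structured data
        seen.extend(batch)
        evidence += " " + linearize(batch)   # linearization: to plain text
        result = llm(question, evidence)     # generation: answer or defer
        if result != NEED_MORE:
            return result
    return None
```

The loop makes the division of labor explicit: the interface handles scale (only small, relevant slices of the structured source are fetched), while the LLM only ever sees linearized text it can reason over.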
