\section{Introduction}

Knowledge-Based Question Answering (KBQA) is a task that aims to answer natural language questions given a knowledge base (KB).
KBQA has a wide range of applications and has attracted considerable attention in both academia and industry.

Recent KBQA works mainly fall into two categories: information retrieval (IR) based methods and semantic parsing (SP) based methods.
For IR-based methods, a series of works [cite] xxx.
For SP-based methods, [cite] used xxx, [cite] applied xxx, and very recently [cite] used xxx.
With the emergence of ChatGPT, [cite] applied large language models (LLMs) to KBQA, achieving state-of-the-art performance.

However, KBQA still faces three major challenges:
1. Complex constraints
Questions with complex constraints are difficult to handle. Existing IR methods mainly xxx. However,

2. Low resources
To train an accurate semantic parser, traditional supervised methods annotate large-scale question-logical form pairs, which incurs an enormous labor cost. How to answer questions without sufficient resources remains under-explored to date.

3. Under-utilization of LLMs
Advanced generative large language models have been shown to possess a certain degree of reasoning ability. Nonetheless, existing works mainly use LLMs for simple predicate selection and do not fully exploit their reasoning capabilities. Therefore, how to leverage the strengths of LLMs to answer questions accurately and explainably, or to help annotate KBQA data, remains an open research problem.



To address the challenges mentioned above, we propose LLM-KGQA, a dialog-form KBQA approach. Figure \ref{fig:model_overview} shows the overall process.

Firstly, we propose a framework that uses LLMs to solve complex KBQA problems: it decomposes a complex question into multiple sub-questions, invokes tools to interact with the KB, searches for the necessary information step by step, and finally generates a SPARQL query.
Secondly, we explore a dialog-form KBQA method that prompts an LLM to complete the KBQA task with only a handful of annotated demonstrations.
LLM-KGQA combines the expressive power of SPARQL for complex questions with the reasoning ability of LLMs.
Specifically, we build the same interaction APIs on three knowledge bases and design a unified interaction scheme, demonstrating the generality of our method.
Lastly, building on the dialog-form KBQA method, we manually annotate KBQA reasoning processes, which enables fine-tuning open-source models and further improves performance on complex questions.
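The step-by-step interaction between the LLM and the KB can be sketched as follows. This is a minimal illustration only: the toy KB, the tool names (\texttt{get\_relations}, \texttt{query}), and the scripted planner standing in for a real prompted LLM are all assumptions for exposition, not our actual implementation, and a real run would end by generating a SPARQL query rather than returning the answer directly.

```python
# Illustrative sketch of a dialog-form KBQA loop.
# The toy KB, tool names, and scripted "LLM" are hypothetical.

TOY_KB = {
    ("Barack_Obama", "spouse"): "Michelle_Obama",
    ("Michelle_Obama", "birthplace"): "Chicago",
}

def get_relations(entity):
    """Tool: list the relations available for an entity in the KB."""
    return [r for (e, r) in TOY_KB if e == entity]

def query(entity, relation):
    """Tool: follow one relation from an entity (one interaction step)."""
    return TOY_KB.get((entity, relation))

def scripted_llm(question, history):
    """Stand-in for an LLM that decomposes the question into tool calls.
    A real system would prompt the model with few-shot demonstrations."""
    if not history:                                   # sub-question 1
        return ("query", "Barack_Obama", "spouse")
    if len(history) == 1:                             # sub-question 2
        return ("query", history[0][1], "birthplace")
    return ("finish", history[-1][1])                 # enough information

def answer(question):
    """Interact with the KB step by step until the planner finishes."""
    history = []
    while True:
        action = scripted_llm(question, history)
        if action[0] == "finish":
            return action[1]
        result = query(action[1], action[2])
        history.append((action, result))

print(answer("Where was the spouse of Barack Obama born?"))  # Chicago
```

Because every KB is wrapped behind the same small tool interface, the same loop runs unchanged across different knowledge bases, which is the sense in which the interaction scheme is unified.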

The main contributions of this paper are summarized as follows. 
\begin{itemize}
    \item We propose a framework that leverages LLMs to solve complex KBQA problems by decomposing a question into sub-questions and interacting with the KB step by step to generate SPARQL queries.
    \item We explore a dialog-form KBQA method that prompts an LLM with only a few annotated demonstrations, combining the expressive power of SPARQL with the reasoning ability of LLMs.
    \item We build the same interaction APIs on three knowledge bases with a unified interaction scheme, demonstrating the generality of our method.
    \item We manually annotate KBQA reasoning processes, enabling fine-tuning of open-source models and further improving performance on complex questions.
\end{itemize}