---
license: cc-by-4.0
task_categories:
- question-answering
language:
- en
size_categories:
- n<1K
---

**Introduction**

In traditional knowledge base question answering (KBQA) methods, semantic parsing plays a crucial role. It requires a semantic parser to be extensively trained on a large dataset of labeled examples, typically question-answer or question-program pairs. However, the rise of LLMs has shifted this paradigm. LLMs excel at learning from a few (or even zero) in-context examples. They use natural language as a general vehicle of thought, enabling them to actively navigate and interact with KBs using auxiliary tools, without training on comprehensive datasets. This advance suggests that LLMs can sidestep the earlier limitations and eliminate the dependency on extensive, high-coverage training data. Such a paradigm is usually encapsulated by the term "language agent" or "LLM agent".

Existing KBQA datasets may not be ideal for evaluating this new paradigm, for two reasons: 1) many questions are single-hop queries over the KB, which fail to sufficiently challenge the capabilities of LLMs, and 2) established KBQA benchmarks contain tens of thousands of test questions, and evaluating the most capable models like GPT-4 on so many questions would be extremely costly and often unnecessary.

As a result, we curate KBQA-Agent to offer a more targeted KBQA evaluation for language agents. KBQA-Agent contains 500 complex questions over Freebase drawn from three existing KBQA datasets: GrailQA, ComplexWebQuestions, and GraphQuestions. To further support future research, we also provide the ground truth action sequence (i.e., tool invocations) that a language agent should take to answer each question.

**Split**

KBQA-Agent targets a training-free setting (we used a one-shot demo in our original experiments), so there is only a single test split.

**Dataset Structure**

Each example in KBQA-Agent has the following fields:

- **qid:** The unique id of a question
- **s-expression:** The ground truth logical form, from which we derive the ground truth actions
- **answer:** The list of answer entities
- **question:** The input question
- **actions:** The ground truth sequence of actions, derived from the s-expression
- **entities:** The topic entities mentioned in the question
- **source:** The source dataset of the question (e.g., GrailQA)
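The fields above can be read with the standard `datasets` library. Below is a minimal loading sketch; the repository path is a placeholder (not the actual Hub id of this dataset), and the split name is assumed, so adjust both before running.

```python
from datasets import load_dataset

# Minimal sketch: load KBQA-Agent and inspect one example.
# NOTE: "path/to/KBQA-Agent" is a placeholder; replace it with the actual
# Hugging Face Hub id of this dataset (or point load_dataset at local files).
# The split name "test" is assumed; the card states there is only one split.
dataset = load_dataset("path/to/KBQA-Agent", split="test")

example = dataset[0]
print(example["question"])      # input question
print(example["s-expression"])  # ground truth logical form
print(example["actions"])       # ground truth tool-invocation sequence
print(example["answer"])        # list of answer entities
print(example["entities"])      # topic entities mentioned in the question
print(example["source"])        # originating dataset, e.g., "GrailQA"
```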
**Citation**

If our paper or related resources prove valuable to your research, we kindly ask for a citation. Please feel free to contact us with any inquiries.

```
@article{Gu2024Middleware,
  author  = {Yu Gu and Yiheng Shu and Hao Yu and Xiao Liu and Yuxiao Dong and Jie Tang and Jayanth Srinivasa and Hugo Latapie and Yu Su},
  title   = {Middleware for LLMs: Tools Are Instrumental for Language Agents in Complex Environments},
  journal = {arXiv preprint arXiv:2402.14672},
  year    = {2024}
}
```

Please also cite the original sources of KBQA-Agent:

**GrailQA:**
```
@inproceedings{grailqa,
  author    = {Yu Gu and Sue Kase and Michelle Vanni and Brian M. Sadler and Percy Liang and Xifeng Yan and Yu Su},
  title     = {Beyond {I.I.D.:} Three Levels of Generalization for Question Answering on Knowledge Bases},
  booktitle = {WWW '21: The Web Conference 2021, Virtual Event / Ljubljana, Slovenia, April 19-23, 2021},
  year      = {2021}
}
```

**ComplexWebQ:**
```
@inproceedings{cwq,
  author    = {Alon Talmor and Jonathan Berant},
  title     = {The Web as a Knowledge-Base for Answering Complex Questions},
  booktitle = {Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 1 (Long Papers)},
  year      = {2018}
}
```

**GraphQuestions:**
```
@inproceedings{graphq,
  author    = {Yu Su and Huan Sun and Brian M. Sadler and Mudhakar Srivatsa and Izzeddin Gur and Zenghui Yan and Xifeng Yan},
  title     = {On Generating Characteristic-rich Question Sets for QA Evaluation},
  booktitle = {Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016},
  year      = {2016}
}
```