entslscheia committed 1f8776b (parent: 69ca534): Update README.md

---
license: cc-by-4.0
task_categories:
- question-answering
language:
- en
size_categories:
- n<1K
---

In traditional knowledge base question answering (KBQA) methods, semantic parsing plays a crucial role: a semantic parser must be extensively trained on a large dataset of labeled examples, typically question-answer or question-program pairs. This necessity arises primarily because the smaller models that predated large language models (LLMs) were data-hungry, needing extensive data to effectively master a task. Additionally, these methods often relied on the assumption that data is independent and identically distributed (i.i.d.), meaning the questions a model could answer needed to match the distribution of the training data. This demanded training data covering a broad spectrum of the KB's entities and relations for the model to understand it adequately.
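
For concreteness, here is a sketch of what one such labeled example might look like, written as a Python literal. The s-expression program style follows GrailQA; the relation is a real Freebase relation, but the MID annotation is an assumption made for illustration.

```python
# A hypothetical question-program pair of the kind a semantic parser is
# trained on. In GrailQA-style s-expressions, (R r) reverses relation r,
# so (JOIN (R r) e) returns entities x with the triple (e, r, x) in the KB.
labeled_example = {
    "question": "What is the capital of France?",
    "program": "(JOIN (R location.country.capital) m.0f8l9c)",  # m.0f8l9c: assumed MID for France
}
```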

However, the rise of LLMs has shifted this paradigm. LLMs excel at learning from few (or even zero) in-context examples. They use natural language as a general vehicle of thought, enabling them to actively navigate and interact with KBs through auxiliary tools, without training on comprehensive datasets. This advance suggests that LLMs can sidestep the earlier limitations and eliminate the dependency on extensive, high-coverage training data.
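
A minimal sketch of the tool-driven interaction loop this paragraph describes is shown below. The tool names (`get_relations`, `get_neighbors`), the `llm` callable, and the action format are assumptions for illustration, not the actual API of this dataset or any specific library.

```python
# Sketch of a language agent answering a KB question via tool calls.
# Tool implementations are stubs; the surrounding loop is the point.

def get_relations(entity: str) -> list[str]:
    """Stub: return KB relations attached to `entity`."""
    raise NotImplementedError

def get_neighbors(entity: str, relation: str) -> list[str]:
    """Stub: return entities reachable from `entity` via `relation`."""
    raise NotImplementedError

TOOLS = {"get_relations": get_relations, "get_neighbors": get_neighbors}

def parse_action(text: str) -> tuple[str, list[str]]:
    """Parse an action string like 'get_relations(m.0f8l9c)'."""
    name, _, rest = text.partition("(")
    args = [a.strip() for a in rest.rstrip(")").split(",") if a.strip()]
    return name.strip(), args

def answer(question: str, llm, max_steps: int = 10) -> str:
    """Let the LLM interleave tool calls and observations until it answers."""
    history = f"Question: {question}"
    for _ in range(max_steps):
        action = llm(history)  # the LLM emits the next action as text
        if action.startswith("FINISH"):
            return action[len("FINISH"):].strip()
        name, args = parse_action(action)
        observation = TOOLS[name](*args)
        history += f"\nAction: {action}\nObservation: {observation}"
    return "no answer within the step budget"
```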

Such a paradigm is usually encapsulated in the term "language agent" or "LLM agent." Existing KBQA datasets are not ideal for evaluating this new paradigm, for two reasons: 1) many questions are single-hop queries over the KB, which fail to sufficiently challenge the capabilities of LLMs, and 2) established KBQA benchmarks contain tens of thousands of test questions, so evaluating the most capable models, such as GPT-4, on all of them would be extremely costly and often unnecessary.

As a result, we curated KBQA-Agent to offer a more targeted KBQA evaluation for language agents. KBQA-Agent contains 500 complex questions over Freebase, drawn from three existing KBQA datasets: GrailQA, ComplexWebQuestions, and GraphQuestions. To further support future research, we also provide, for each question, the ground-truth action sequence (i.e., the tool invocations) a language agent should take to answer it.
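
A minimal sketch of loading the dataset with the Hugging Face `datasets` library follows. The repo ID, split name, and per-item fields are assumptions; check the dataset viewer for the actual schema.

```python
from datasets import load_dataset

# REPO_ID and the split name are assumptions for this sketch; verify them
# against the Hub page before use.
REPO_ID = "entslscheia/KBQA-Agent"

ds = load_dataset(REPO_ID, split="test")
print(len(ds))  # expected: 500 complex questions

example = ds[0]
# Each item should carry a complex question, its source dataset, and the
# ground-truth action (tool-invocation) sequence described above.
print(example)
```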

If you use KBQA-Agent, please remember to also cite the original source datasets.