---
language:
- en
pipeline_tag: text2text-generation
tags:
- semantic parsing
- bart
- KoPL
---

# Model Card

This model is fine-tuned from BART-base for semantic parsing: it converts a natural language question into a logical form known as a KoPL program. It was fine-tuned on the [KQA Pro dataset](https://aclanthology.org/2022.acl-long.422/).

## Model Details

### Model Description

- **Model type:** Semantic parsing model
- **Language(s) (NLP):** English
- **Finetuned from model:** BART-base

## How to Get Started with the Model

Refer to the code linked below to get started with the model, or see the minimal sketch that follows:

[GitHub Link](https://github.com/THU-KEG/DiaKoP/blob/main/backend-src/semantic_parser.py)
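
Below is a minimal sketch of loading the model with Hugging Face Transformers and generating a KoPL program from a question. The repository ID is a placeholder (this card does not state the actual model ID), and the decoding settings are illustrative assumptions rather than the exact configuration used in the linked script.

```python
# Minimal sketch: natural language question -> linearized KoPL program.
from transformers import AutoTokenizer, BartForConditionalGeneration

# Placeholder repository ID; replace with the actual model ID on the Hub.
model_id = "your-username/bart-base-kqapro-kopl"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = BartForConditionalGeneration.from_pretrained(model_id)

question = "How many countries share a border with Germany?"
inputs = tokenizer(question, return_tensors="pt")

# Beam search is a common decoding choice for semantic parsing;
# max_length bounds the length of the generated program string.
output_ids = model.generate(**inputs, num_beams=4, max_length=512)
program = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(program)  # the model's linearized KoPL program for the question
```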

## Citation

**BibTeX:**

```
@inproceedings{cao-etal-2022-kqa,
    title = "{KQA} Pro: A Dataset with Explicit Compositional Programs for Complex Question Answering over Knowledge Base",
    author = "Cao, Shulin and
      Shi, Jiaxin and
      Pan, Liangming and
      Nie, Lunyiu and
      Xiang, Yutong and
      Hou, Lei and
      Li, Juanzi and
      He, Bin and
      Zhang, Hanwang",
    editor = "Muresan, Smaranda and
      Nakov, Preslav and
      Villavicencio, Aline",
    booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = may,
    year = "2022",
    address = "Dublin, Ireland",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.acl-long.422",
    doi = "10.18653/v1/2022.acl-long.422",
    pages = "6101--6119",
    abstract = "Complex question answering over knowledge base (Complex KBQA) is challenging because it requires various compositional reasoning capabilities, such as multi-hop inference, attribute comparison, set operation, etc. Existing benchmarks have some shortcomings that limit the development of Complex KBQA: 1) they only provide QA pairs without explicit reasoning processes; 2) questions are poor in diversity or scale. To this end, we introduce KQA Pro, a dataset for Complex KBQA including around 120K diverse natural language questions. We introduce a compositional and interpretable programming language KoPL to represent the reasoning process of complex questions. For each question, we provide the corresponding KoPL program and SPARQL query, so that KQA Pro can serve for both KBQA and semantic parsing tasks. Experimental results show that state-of-the-art KBQA methods cannot achieve promising results on KQA Pro as on current datasets, which suggests that KQA Pro is challenging and Complex KBQA requires further research efforts. We also treat KQA Pro as a diagnostic dataset for testing multiple reasoning skills, conduct a thorough evaluation of existing models and discuss further directions for Complex KBQA. Our codes and datasets can be obtained from \url{https://github.com/shijx12/KQAPro_Baselines}.",
}
```