---
pipeline_tag: question-answering
tags:
- logical reasoning
- reasoning
---

## Model Details

These are the trained models for **LoGiPT** from the NAACL'24 paper *"Language Models can be Deductive Solvers"*.

- LoGiPT-[A]-[B]: the specific model version of LoGiPT
  - [A]: the backbone model, which can be 'vicuna-13b-v1.5-16k', 'CodeLlama-13b-hf' or 'CodeLlama-13b-Instruct-hf'.
  - [B]: the training data, which can be 'proofwriter' or 'prontoqa'.

All models are organised in Vicuna style and were trained with [FastChat-0.2.30](https://github.com/lm-sys/FastChat).

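As an illustration, here is a minimal sketch of loading one of these checkpoints with Transformers and prompting it Vicuna-style. The repo id below is inferred from the LoGiPT-[A]-[B] naming scheme, and the Vicuna v1.1 conversation template is assumed; verify both against the actual repositories.

```python
# Minimal sketch, not an official example. The repo id is inferred from the
# LoGiPT-[A]-[B] naming scheme above, and the Vicuna v1.1 prompt template is
# assumed because the models are organised in Vicuna style.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jzfeng/LoGiPT-vicuna-13b-v1.5-16k-proofwriter"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Assumed Vicuna v1.1-style prompt template.
system = ("A chat between a curious user and an artificial intelligence assistant. "
          "The assistant gives helpful, detailed, and polite answers to the user's questions.")
question = "All cats are animals. Tom is a cat. Is Tom an animal?"
prompt = f"{system} USER: {question} ASSISTANT:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```
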
All training examples are organised in JSON format and Vicuna style in [jzfeng/LoGiPT-data](https://huggingface.co/datasets/jzfeng/LoGiPT-data).

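A minimal sketch of browsing that training data with the `datasets` library, assuming the usual Vicuna/FastChat JSON layout (a `conversations` list with `from`/`value` fields) and a `train` split; neither is confirmed by this card.

```python
# Minimal sketch. The split name and the "conversations"/"from"/"value" fields
# follow the usual Vicuna/FastChat JSON layout and are assumptions here.
from datasets import load_dataset

data = load_dataset("jzfeng/LoGiPT-data", split="train")

# Print the first example's dialogue turns, truncated for readability.
for turn in data[0]["conversations"]:
    print(f'{turn["from"]}: {turn["value"][:200]}')
```
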
### If you find these models helpful, please cite our NAACL'24 paper (or the arXiv version: https://arxiv.org/abs/2311.06158):

```bibtex
@inproceedings{feng2024language,
  title={Language Models can be Deductive Solvers},
  author={Feng, Jiazhan and Xu, Ruochen and Hao, Junheng and Sharma, Hiteshi and Shen, Yelong and Zhao, Dongyan and Chen, Weizhu},
  booktitle={Findings of the Association for Computational Linguistics: NAACL 2024},
  pages={4026--4042},
  year={2024}
}
```