---
license: llama2
datasets:
- jzfeng/LoGiPT-data
language:
- en
pipeline_tag: question-answering
tags:
- logical reasoning
- reasoning
---
## Model Details
These are the trained models for **LoGiPT** from the NAACL'24 paper *"Language Models can be Deductive Solvers"*.

- LoGiPT-[A]-[B]: The specific model version of LoGiPT
  - [A]: The backbone model, which can be 'vicuna-13b-v1.5-16k', 'CodeLlama-13b-hf' or 'CodeLlama-13b-Instruct-hf'.
  - [B]: The training data, which can be 'proofwriter' or 'prontoqa'.

All models are organised in the Vicuna style and trained with [FastChat-0.2.30](https://github.com/lm-sys/FastChat).
All training examples are organised in JSON format and Vicuna style in [jzfeng/LoGiPT-data](https://huggingface.co/datasets/jzfeng/LoGiPT-data).
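Since the models follow the Vicuna format, they can be loaded and prompted like any other Vicuna-style causal LM. Below is a minimal sketch using Hugging Face Transformers; the repository id and the Vicuna conversation template shown are assumptions based on the naming scheme and FastChat training setup described above, so substitute the actual model id for your chosen backbone and training data.

```python
# Minimal usage sketch (assumptions noted in the comments below).
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repository id following the LoGiPT-[A]-[B] naming scheme above.
model_id = "jzfeng/LoGiPT-vicuna-13b-v1.5-16k-proofwriter"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Vicuna-style prompt template, as used by FastChat; the question text is a placeholder.
prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    "USER: <your deductive-reasoning question here> ASSISTANT:"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```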
### If you find these models helpful, please cite our NAACL'24 paper (arXiv version: https://arxiv.org/abs/2311.06158):
```bibtex
@inproceedings{feng2024language,
  title={Language Models can be Deductive Solvers},
  author={Feng, Jiazhan and Xu, Ruochen and Hao, Junheng and Sharma, Hiteshi and Shen, Yelong and Zhao, Dongyan and Chen, Weizhu},
  booktitle={Findings of the Association for Computational Linguistics: NAACL 2024},
  pages={4026--4042},
  year={2024}
}
```