---
license: llama2
datasets:
  - jzfeng/LoGiPT-data
language:
  - en
pipeline_tag: question-answering
tags:
  - logical reasoning
  - reasoning
---

## Model Details

These are the trained models for LoGiPT from the NAACL'24 paper "Language Models can be Deductive Solvers".

- LoGiPT-[A]-[B]: the specific model variant of LoGiPT (a loading sketch follows this list)
  - [A]: the backbone model, one of 'vicuna-13b-v1.5-16k', 'CodeLlama-13b-hf', or 'CodeLlama-13b-Instruct-hf'.
  - [B]: the training data, either 'proofwriter' or 'prontoqa'.
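
As a quick reference, below is a minimal sketch of loading one variant with Hugging Face `transformers`. The repository ID is assembled from the naming pattern above and is an assumption; substitute the exact model repo you intend to use:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo ID following the LoGiPT-[A]-[B] pattern above;
# replace it with the exact model repository you want.
model_id = "jzfeng/LoGiPT-vicuna-13b-v1.5-16k-proofwriter"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
```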

All models are organised in the Vicuna conversation style and were trained with FastChat-0.2.30.
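
Since the models expect Vicuna-style prompts, inputs should be wrapped in a FastChat conversation template before generation. Here is a minimal sketch using FastChat's standard Vicuna template (the example facts and question are made up):

```python
from fastchat.conversation import get_conv_template

# "vicuna_v1.1" is FastChat's standard Vicuna conversation template.
conv = get_conv_template("vicuna_v1.1")
conv.append_message(conv.roles[0], "Charlie is green. All green people are kind. Is Charlie kind?")
conv.append_message(conv.roles[1], None)  # leave the assistant turn open
prompt = conv.get_prompt()
print(prompt)
```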

All training examples are provided in JSON format, organised in the Vicuna conversation style, in jzfeng/LoGiPT-data.
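
A minimal sketch of inspecting the training data with the `datasets` library (assuming the JSON files load under the default configuration; split and field names are not guaranteed here):

```python
from datasets import load_dataset

# Load the LoGiPT training data from the Hub; inspect the printed
# DatasetDict to see which splits and columns actually exist.
data = load_dataset("jzfeng/LoGiPT-data")
print(data)

first_split = next(iter(data.values()))
print(first_split[0])  # one Vicuna-style training example
```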

If you find these models helpful, please cite our NAACL'24 paper (arXiv version: https://arxiv.org/abs/2311.06158):

```bibtex
@inproceedings{feng2024language,
  title={Language Models can be Deductive Solvers},
  author={Feng, Jiazhan and Xu, Ruochen and Hao, Junheng and Sharma, Hiteshi and Shen, Yelong and Zhao, Dongyan and Chen, Weizhu},
  booktitle={Findings of the Association for Computational Linguistics: NAACL 2024},
  pages={4026--4042},
  year={2024}
}
```